We do teach these things; they are just not core CS topics but live in other areas, relegated to electives like a software engineering course. At CMU we have an entire Master's program for software engineering and an entire PhD program (in my department). We teach exactly the kinds of things the blog post is about, and more. Software Engineering is a whole field, a whole discipline.
I get that this is a blog post and necessarily short, but yes, there are courses that teach these skills. There's a big disconnect between CS and SE in general, but it's not as bad as "no one teaches how to build quality software". We do work on this.
We were taught these things at the Bachelor's program in CS I went to in Sweden. At my first job I then slipped on a banana peel into a de facto tech lead position within a year, and I don't think it was due to my inherent greatness but rather that I was taught software engineering and the other colleagues at my level had not.
Ironically, the software engineering courses were the only ones I disliked while a student. An entire course in design patterns where strict adherence to UML was enforced felt a bit archaic. We had a course in software QA which mostly consisted of learning TDD and the standard tooling in the Java ecosystem, with some cursory walkthroughs of other types of QA like static analysis, fuzzing and performance testing. At the time it felt so boring, I liked to actually build stuff! A couple of years later I joined a team with very competent, CS-educated developers tasked to set up a workflow and all the tooling for testing software with high security requirements. They were dumbfounded when I knew what all the stuff they were tasked to do was!
There's a massive gap between what's taught at CMU and what's taught at most universities. And even if it is taught, it's usually outdated or focused on very literal stuff like how to write web applications. I'd have killed for a class that actually focuses on implementation, on teamwork, on building complicated systems.
Almost every CS course I took went the other way and had strict cheating policies that essentially made any group work verboten. There was 1 group project in 1 course I took in 4 years.
My spouse on the other hand took an explicitly IT program and they had group projects, engaging with real world users, building real solutions, etc.
That's crazy! My uni had senior design projects, and labs. Shoot, my community college had those, too. Most of our classes had lab projects, and sometimes the lab projects were group projects. We were encouraged to help each other out. I can't imagine a class where collaboration was totally against the rules.
And I mean, that went with everything. In my EE classes we did lots of collaboration. We had study groups, homework groups, etc. It was a lot of fun. I'm sad to hear there are places where they straight up make that against the rules.
My uni also had engineering ethics classes - in my major it was mandatory to take 2 of them. I think it makes sense and should be more common for software engineers. A lot of software is used to operate planes, cars, medical equipment, and nowadays also help make decisions that can have life-threatening consequences.
> strict cheating policies that essentially made any group work verboten
If I had to guess, some polytechnic school or another?
With some classes even forbidding discussing work with other students: each assignment required a signed (digitally or otherwise) affidavit listing everyone you consulted, acknowledging that if you actually listed anyone you were admitting to violating the academic honesty policies, and if you didn't list anyone yet had spoken with others, you were of course also violating the academic honesty policies.
Where only consulting the professors or TAs was allowed, the TAs were never around, and the professors refused to help because if they gave you any hints, it would apparently give away the answer, which would be unfair to the other students.
I had students that copied one-another's work; in fact I had few students that didn't copy. It made it impossible to mark their projects correctly, so I asked my more-experienced colleagues.
The best advice I got was to explain the difference between collaboration (strongly encouraged) and plagiarism (re-take the course if you're lucky). Forbidding collaboration is a disastrous policy, so an instructor shouldn't have a problem with giving the same mark to each member of a group of collaborators. You just have to test that each individual can explain what they've submitted.
My school auto-graded everything (file drop your code to a server by midnight & get a score back). I don't recall a single instance of TA/professor written or verbal feedback on code.
Yuh. I guess that's "modern times". I taught in the late eighties, and auto-grading wasn't a thing.
FWIW, I was a "temporary visiting lecturer", i.e. contract fill-in help. I had to write the syllabus (literally), the course plan, and the hand-outs. I also had to write all the tests and the exams; and I had to mark the exams, and then sit on the exam board through the summer holidays. The pretty girls would flutter their eyelashes at me, and ask for leniency. I suspect they took their cue from my louche colleague, who was evidently happy to take sexual advantage of his students in exchange for better marks.
[Edit] I liked this colleague; but I would not have introduced him to my wife or my daughter.
This was in the 1980's. We're talking 5 1/4" floppies, no internet, 64KB of RAM. I had to review my students' work as dead-tree submissions or completed circuits or whatever.
(I certainly wasn't going to take digital submissions from them; that would mean floppy disks, and I regarded any disk that had been touched by any student as if it were infected with plague, because it almost certainly was. All the school systems were always infected).
I taught a course "in a previous life" and while I wasn't anything close to as strict as you say here, I can tell you the flip side: students would copy, modify superficially (some, less superficially) and then claim "it's my work, we just talked, that's why the similarities!" (with some even having the nerve to say, "it's not copied, look, plagiarism detection software says similarity is less than x%!)
Perhaps I was wrong but I really wanted the students who took the course to put in the work themselves. Just choose a different course/course set, you _knew_ this one was going to be hard!
So yeah, the guideline was that we'd be the eventual judges of what was "too similar", and if you're concerned, just don't discuss implementation details with anyone. I realize it prevents honest collaboration, and that's bad too... but sometimes it's a "damned if you do, damned if you don't" kind of situation.
What we did when correcting the homework was to compare the signature of the assembly output (not manually, of course). You can move functions around, rename them, change the names of variables... but the signature of the instructions remains the same.
We caught 2 guys, of course we didn't know who copied from whom, but we quickly found out by challenging each of them with a fizz-buzz kind of question.
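(For anyone curious what that looks like in practice, here's a rough sketch of the idea, not the actual tool we used. It disassembles two compiled submissions with objdump and compares only the instruction mnemonics, so renamed variables and reordered functions don't hide a copy. The object file names are placeholders.)

    import subprocess
    from collections import Counter

    def opcode_signature(object_file):
        """Return the multiset of instruction mnemonics in a disassembly."""
        out = subprocess.run(["objdump", "-d", object_file],
                             capture_output=True, text=True, check=True).stdout
        mnemonics = []
        for line in out.splitlines():
            parts = line.split("\t")                     # "  401130:\t55\tpush   %rbp"
            if len(parts) >= 3 and parts[2].strip():
                mnemonics.append(parts[2].split()[0])    # keep the mnemonic only
        return Counter(mnemonics)

    def similarity(sig_a, sig_b):
        """Fraction of instructions the two disassemblies have in common."""
        shared = sum((sig_a & sig_b).values())
        return shared / max(sum(sig_a.values()), sum(sig_b.values()), 1)

    # similarity(opcode_signature("student_a.o"), opcode_signature("student_b.o"))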
Also, for whatever my purely anecdotal experience is worth, I'd say the general quality of a professor's teaching was negatively correlated with how hard-ass they were about anti-collaboration policies. That also held true for a couple classes I'd dropped and retaken with a different prof.
One might think I'm just rating easier professors higher, but no, there were definitely enough sub-par teachers with lax policies and forgiving grading who failed to impart what their course was supposed to cover. There were also tough professors I learned a lot from. It's just that I can't recall a single decent instructor among those obsessed with anti-collaboration policies.
I've taken course work at 6+ different universities, and in my experience different groups of international students have very different perspectives on what is cheating vs. collaboration. I think it's likely attributable to the western ideal of the individual vs. the collective.
My university did this - each semester's class inherited the code base the last semester had worked on.
As a learning tool it was a disaster. The code had never been touched by a skilled engineer, just years of undergrads pulling all-nighters.
That meant it didn't teach what good maintenance looked like. Students just piled their hacks on top of someone else's hacks so they could hit the deadline.
It wasn't a good way to learn to write quality software (even if reality sometimes looks this way).
It was also the only software engineering course in our CS curriculum.
Not each week, but at my university they had enough focus on the science part that most semester-long solo projects were semester-long and solo only in name. Students were assigned some component or incremental stage of a larger project spanning years and multiple PhDs, not some assignment for assignment's sake. That certainly did not make them perfect learning environments, because students were left on their own in how to solve cooperation (the postgrads running those macro projects might be considered POs, but not technical leads, and were just as clueless as the rest), but the "pass on to the next each week" approach would have exactly the same gaps.
Me too. All throughout my education, group projects and speeches were both these scary, uncommon things that you could afford to fuck up, and I skated by on individual talent.
Now I'm an adult, several years into my career, wishing that the schools had done a bigger quantity of low-stakes group projects instead. It's a muscle that was never exercised.
That would be a cool semester-long project assignment: everybody has to plan/architect their own software project and then work on implementing the others', but you don't work on your own project; that one you just manage.
This just isn't realistic in a general CS undergrad though. Students don't have the time or foundations to build a realistically complicated system, and the institutions don't have the skills or currency to teach them how. You complain about content being out of date, but the foundations of quality software are by and large not technical, nor do they go out of style. The implementations sure do.
What you're asking for is where vocational (2 year) programs really shine, and one of the few places where I've seen bootcamp graduates have an advantage. Unfortunately both then lack some really valuable underpinnings, for example relational algebra. It seems that once again there is no silver bullet for replacing time and experience.
In software engineering schools in France, both in my personal experience and other Bac+5 people I’ve hired, the entire Bac+4 year was dedicated to group projects. When you have 8 group projects in parallel at any time, it does teach team work, management, quality control, all the good stuff.
After the first failed project, everyone agrees that they need a boss without assigned duties. Then in 5th year, we’d have one or two class-wide projects (20-30 people).
This, along with joining event-organizing charities on the campus, where you’d work 3-5hrs a week to build a hundred-thousand-to-a-million dollars event (NGOs on campus, or concerts, or a sports meeting, to each their preferences, but it was the culture). And it wasn’t always a good time together, it’s not flowers all the way down.
I’m only surprised some school might consider engineering without group effort. On the other hand, I deeply deeply regret spending this much time in charities and not enough dating, as I have never been recognized for charity service, and family was important to me.
Yeah. A fair bit of this is just “people working in teams” stuff, that people that buy into ‘developer exceptionalism’ will tell you is sacred software developer knowledge. It isn’t.
Software engineering isn’t just about teamwork, and not all software development-related teamwork skill is generalisable to other industries, but it’s far from uncommon for there to be some trendy blog post laying out the sorts of things that, yes, an MBA program will teach someone. Which is fine, if not for the fact that these same people will scoff at “clueless MBAs”.
Do they funnel soon-to-be grads into stressful Zoom calls where product managers handwave an entire legacy stack still somehow running on ColdFusion and want a rebrand with 'AI' starting Jan 1??
No, but our professor assigned teams at random, gave us a buggy spec, and then changed the spec with one week to go during finals week. (This last part appears to have been planned; they did it every year.)
This was a surprisingly effective course, if sadistic.
I had something like this too - it was required for my CS degree. Our class split up into teams of 5, but the whole class of 30 was working on a single project. It was a semester-long project and each team also had to integrate with each other to build the final solution.
Here's the thing, though: Of a CS graduating class, 90% of them will work as software engineers, not as computer scientists. (All numbers made up, but I think they're about right.)
We don't need these things to be electives. We don't need them to be a master's program. We need an undergraduate software engineering program, and we need 90% of the people in CS to switch to that program instead.
I agree with you! It's hard to change curricula because there are so many competing interests. CS is an evolving field and things like machine learning have burst onto the stage, clamoring for attention. There is also an age-old debate about whether CS departments are trade schools, math departments, or science. Personally I think software engineering skills are paramount for 90% of graduates. How do we fit this into a full curriculum? What gets the axe? Unclear.
> There is also an age-old debate about whether CS departments are trade schools, math departments, or science. Personally I think software engineering skills are paramount for 90% of graduates.
The question as well is: are Chemical Engineering, Mechanical Engineering, Materials Engineering trade schools?
I think it's a key call out as CS touches on so many things.
There are arguments for it being math, science, engineering, a trade school or a combination of the above.
And then if you separate them out completely you end up with people doing CS being out of touch with what happens in the real world and vice versa.
I think in the end you probably need to have a single overall degree with a specialization (Programming as Engineering, Programming as Math, or Programming as Computer Science), with lots of overlap in the core.
And then you can still have a bootcamp-style trade school as well.
Now all of that said, that still doesn't solve the CS equivalent of "Math for business majors", or the equivalent of "Programming for Scientists", or the like, which is already a really important case to offer: where you major in Bio/Chem/something else, but being able to apply programming to your day job is important.
Although that probably sits closer to the Applied Software category that you might find in a business school: using spreadsheets, basic command lines, intro to databases, or Python.
But to your point, how rarely software engineering is being taught is a huge problem. Even if only 30% of degree holders took classes in it, it would be huge in helping spread best practices.
I don't really think that software engineering is a trade per se. I think it is a much more creative activity that requires a lot more baseline knowledge. I think an automotive engineer is to a mechanic as a software engineer is to an IT administrator. There is still a fair amount of creativity and knowledge required for being a mechanic or IT admin, but I don't think it's nearly the same amount.
Software engineering is interesting, though, because it does not require as much formal education as many other engineering fields to get a job. I think this is in part because it is very economically valuable (in the US at least) and because the only tool you need is a computer with an Internet connection.
With all of that said, I think SWE is probably closer to a trade than other engineering disciplines, but not by all that much.
Exactly what I see. I know a few cases of CTOs influencing the curriculum of CS courses, pushing it to be more “market ready”. Which was effectively turning it into a 4 year bootcamp.
Which is a big shame, because it’s very common to see developers who would have benefited from a proper CS curriculum, and have to learn it later in life.
For years a friend tried to get his department (where he worked as a lecturer) to add a software engineering course where the basic syllabus was to (1) receive a TOI from the last semester and the code base, (2) implement some new features in the code base, (3) deploy and operate the code for awhile, and (4) produce a TOI for the next semester.
The code base was for a simple service that basically provided virus/malware scanning and included the malware scanner and signatures (this ensured there would never be an end to the work - there's always more signatures to add, more features, etc.)
I thought this was a fantastic idea and it's a pity he never convinced them. That was more than fifteen years ago, and in his plan it would have just run forever.
In my university (US, state school), we had a software engineering course exactly like this. It was great in theory, but in practice the experience was rushed, the codebase was poor quality (layers upon layers of nothing features with varying code quality), and the background knowledge was completely ignored. The application we had to work on was a Tomcat Java web application with an Angular frontend, when neither of those technologies was taught in any other classes (including electives).
This approach to education can work, but I think simulating/mocking portions of this would have been more helpful (it could've been a teacher/TA managed codebase we started with rather than the monstrosity passed between generations of students who were inexperienced.)
You needed to partner with the business school to get a future MBA to convince the faculty (executives) the biggest return (profitability) was a total re-write!
I had a single phone interview with someone at Northwestern (a long time ago) where they were looking for someone to build a pen of developers to "partner" with MBA students to turn ideas into apps. I laughed so hard my sides hurt.
I think an example like yours, where things go wrong, is the most realistic exposure to programming you can give someone.
Learning why things are bad, and why it's bad to experience them, offers a new level of appreciation and better ways to argue why certain things should be done.
The thing is, for academics, quality software sometimes isn't actually quality software.
My experience has been that people whose first jobs were in companies with quality software, or whose job included reading through other people's quality software, learn to write good software; the others learn whatever they saw in the environments they worked in.
That sounds like an excellent way to do practice that directly mirrors real-world SWE, while still cutting it down to an appropriate size for a pedagogical environment. What a good idea.
I went to a STEM school and exactly 0 of the professors had ever been in industry, or at least not in the last 30 years. The only guy with some experience was an underpaid lecturer. He was also the only good lecturer.
A lot of professors just want to do research and mentor students onto the PhD track to self-replicate. My mandated faculty advisor was basically like "go to the career center" when I asked about, you know, getting a job of some sort with my degree.
So yes, it is a real problem. CMU may stand out by actually having courses in the space, but it is not the norm by any means.
If my friends hadn’t had such vividly bad experiences with the compiler class, I might not have taken the distributed computing class that was one of the other options to fulfill that category.
It’s not the most defining class of my undergrad years, but it was pretty damned close.
The fact that most people designing systems don’t know this material inspires a mix of anger and existential dread.
After seeing the same mistakes made over and over again I can't help but agree. This is how one builds enterprise software now, and it is poorly understood by most developers, although that is starting to change. If I were designing a college curriculum, required courses would include all of the normal CS stuff but also software engineering, distributed computing, design patterns, web application fundamentals, compiler design, databases, DevOps/Cloud, testing fundamentals, UX design, security basics, data science, IT project/process management, essentials of SOA, product mgt and requirements. Of course, it's been so long since I went to college, none of these things existed back in the day, so perhaps modern curriculum has all of these now!
I took one of these kinds of classes in my masters program this year. They were totally obsessed with UML. It would be nice if these classes could move beyond dogma that is decades old.
What would be better? Change tools every 3-5 years like the industry does, so by the time any given instructor actually has a grasp on a particular tool or paradigm, its already obsolete (or at least fallen out of fashion) too?
I'm no fan of UML, but the exercise is to teach students how to plan, how to express that plan, and how to reason about other people's plans. The students will certainly draw a lot of flow diagrams in their careers, and will almost certainly deal with fussy micromanagers who demand their flow diagrams adhere to some arbitrary schema that has only limited impact on the actual quality of their work or documentation.
I haven't seen a UML diagram once in 7 years of working. The approach presented in the book "A Philosophy of Software Design" is much better than the outdated bullshit from the 90s.
I never got the hate against UML. To me, UML is just a language to communicate about several aspects of a technical design, and to visualize a technical design to make it easier to reason about it.
I did not read the book "A Philosophy of Software Design", but I just scanned the table of contents, and it is not clear to me how "A Philosophy of Software Design" contradicts using UML.
Are you telling me that in those 7 years of working, you never once used a class diagram? Or a database diagram? Or an activity diagram, deployment diagram, sequence diagram, state diagram?
I find that hard to believe... how do you explain your design to other people in the team? Or do you mean that you do make that kind of diagrams, but just use your own conventions instead of following the UML standard?
Personally, I often use UML diagrams when I am working on a technical design and I use those diagrams to discuss the design with the team. On almost every project I create a class diagram for all entities in the system. I rarely make a database diagram because that often is a direct translation of the class diagram. For some of the more complex flows in the system, I create an activity diagram. For stuff that goes through state transitions, I create state diagrams. In my case, this really helps me to reason about the design and improve the design. And all those diagrams are also very useful to explain the whole software system to people who are new on the project.
That does not mean that my diagrams are always strictly UML-compliant or that I make a diagram for every little piece in the software system. The goal is not to have UML-compliant diagrams, but to efficiently communicate the technical design to other people, and UML is nice for this because it is a standard.
UML is a tool to do something (namely, formal and detailed specification of systems) that in many places nowadays isn't really done. I think it is totally plausible that over 7 years of professional work OP has never been in a situation where one person has made a detailed design and wants to present it in a formal manner to other people in the team using diagrams (as opposed to answering specific questions of "what does this particular thing in the code do"). If they discuss the process, they tell about the process without using an activity diagram. If someone wants to view a database diagram, they use some tool that autogenerates one from the data, and discards it after viewing instead of attempting to maintain it as a formal documentation.
I agree that all those diagrams are also very useful for explaining the whole software system to people who are new on the project. However, that doesn't imply that having this ability is common; many (IMHO most, but I don't have the data) companies intentionally don't put in the time and effort to maintain such up-to-date diagrams for their systems.
> Change tools every 3-5 years like the industry does, so by the time any given instructor actually has a grasp on a particular tool or paradigm, its already obsolete (or at least fallen out of fashion) too?
I mean, yeah. Seeing that wave happen over the course of their college career would probably be better prep for a career than most CS classes.
Great! So, do you bring your whiteboard to lecture to turn in, or take a picture of it, or just schedule time with your professor to whiteboard in front of them?
All I'm arguing for here is that UML serves the same purpose as those online homework apps. Correctly formatting your calculus homework to be accepted by that interface is as unrelated to calculus as UML mastery is to effective software design, but it resolves some of the same logistical challenges, while introducing others.
A whiteboard is just a medium to draw.
Uml is a standard that says how to express certain ideas as symbols that can be drawn.
It's not clear to me what your argument is.
Is it using whiteboards to draw uml instead of special uml software?
If so, be prepared to take much longer to draw the diagram.
Or do you mean uml is deficient compared to free drawing of symbols on a whiteboard without a standard?
If so, be prepared that nobody will completely understand your diagram without explanation
CMU constantly reevaluates its MSE program with input from many different angles. I've participated here and I think we're trying hard to balance important foundational knowledge with practical skills of the day. I don't think we over-emphasize UML or any one particular silver bullet in our program.
To a first approximation, software developers don't have masters degrees. If you are thinking about changing how an industry does its work, focusing on graduate courses seems counterproductive.
I disagree. I have a Master's in Software Engineering and the way to change things is for those with the formal education to try and spread good practices as much as possible in the workplace. Sometimes the main benefit is just knowing that good practices exist so you can seek them out.
The biggest impact I've had at the places I've worked have been about procedures and methodology, not how to use UML or draw a dataflow diagram.
- Have a process around software releases. Doesn't matter what as much as it has to be repeatable.
- Review your designs, review your code, don't do things in isolation.
- Have a known location for documents and project information.
- Be consistent, don't do every project completely differently.
- Get data before you try to fix the problem.
- Learn from your mistakes and adjust your processes to that learning.
- And many more things that sound like common sense (and they are) but you'd be amazed at how even in 2023 many companies are developing software in complete chaos, with no discernible process.
What I'm saying is that if your goal is to introduce more engineering rigor and your plan is for the tiny percentage of graduate school graduates to percolate these ideas through the industry, it's probably a bad plan and likely to fail.
This was a thread about why software developers don't do engineering like other disciplines. One partial answer is that those other disciplines take it much more seriously at the undergraduate level, at least on average.
Probably the more compelling answer is that the industry doesn't really want them to, for the most part.
While I'm not suggesting that UML is necessarily the solution (I hope it's not), the observation that so few developers touch anything that even looks like UML is a good indication that a lot of software is in fact NOT designed; it's just sort of cobbled together from a few sketches on a whiteboard.
I hope that people say they hate UML and then just make UML (class, database, activity, ...) diagrams according to their own conventions, but I am afraid you are right and that a lot of software is just "cobbled together"...
But debugging is about "trying out random things". You can call it a Monte-Carlo tree search if you want to sound smart.
And I don't feel it is something that is worth teaching in universities, because it is 90% experience, and for me the point of universities is not to replace experience, just to give students enough so that they are not completely clueless for their first job; the rest will come naturally.
What universities can teach you are the tools that you can use for debugging: debuggers, logging, static and dynamic analyzers, etc..., different classes of bugs: memory errors, the heap, the stack, injection, race conditions, etc..., and testing: branch and line coverage, mocking, fuzzing, etc... How to apply the techniques is down to experience.
In fact, what I find most disappointing is not junior programmers struggling with debugging; that is normal, you need experience to debug efficiently and juniors don't have enough yet. The problem is when seniors are missing entire classes of tools and techniques, as in, they don't even know they exist.
The "Monte-Carlo tree search" space is usually far too large for this to work well!
It is true that initially you may not know where the bug is but you have to collect more evidence if possible, see if you can always cause the bug to manifest itself by some tests, explore it further, the goal being to form a hypothesis as to what the cause may be. Then you test the hypothesis, and if the test fails then you form another hypothesis. If the test succeeds, you refine the hypothesis until you find what is going wrong.
Such hypotheses are not formed randomly. You learn more about what may be the problem by varying external conditions or reading the code or single stepping or setting breakpoints and examining program state, by adding printfs etc. You can also use any help the compiler gives you, or use techniques like binary search through commits to narrow down the amount of code you have to explore. The goal is to form a mental model of the program fragment around where the code might be so that you can reason about how things are going wrong.
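(The "binary search through commits" mentioned above is what git bisect automates. For anyone who hasn't seen it, the underlying idea is just this, sketched in Python; is_buggy and tests_pass_at are hypothetical stand-ins for "check out the commit and run the failing test".)

    def first_bad_commit(commits, is_buggy):
        """Find the first commit where is_buggy(commit) is True.

        Assumes commits are ordered oldest -> newest, the oldest is known good,
        the newest is known bad, and the bug never disappears once introduced.
        This is the idea behind `git bisect`.
        """
        lo, hi = 0, len(commits) - 1        # invariant: commits[lo] good, commits[hi] bad
        while lo + 1 < hi:
            mid = (lo + hi) // 2
            if is_buggy(commits[mid]):
                hi = mid                    # bug already present: look earlier
            else:
                lo = mid                    # still good: look later
        return commits[hi]

    # first_bad_commit(commit_ids, lambda c: not tests_pass_at(c))   # hypothetical helpers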
Another thing to note is that you make the smallest possible change to test a hypothesis, at least if the bug is timing or concurrency related. Some changes may change timing sufficiently that the bug hides. If the symptom disappears, it doesn't mean you solved the problem -- you must understand why, and whether the symptom merely disappeared or the bug got fixed. In one case, as I fixed secondary bugs, the system stayed up longer and longer. But these are like accessories to the real murderer. You have to stay on the trail until you nail the real killer!
Another way of looking at this: a crime has been committed and since you don't know who the culprit is or where you may find evidence, you disturb the crime scene as little as possible, and restore things in case you have to move something.
But this is not usually what happens. People change things around without clear thinking -- change some code just because they don't like it or they think it can be improved or simplified -- and the symptom disappears and they declare success. Or they form a hypothesis, assume it is right and proceed to make the "fix" and if that doesn't work, they make another similar leap of logic. Or they fix a secondary issue, not the root cause so that the same bug will manifest again in a different place.
I suspect that GP was talking about some notetaking tactics to systematically narrow things down while throwing educated guesses against the wall. So much of debugging is running in circles and trying the same thing again. No amount of notetaking can completely remove that, because mistakes in the observation are just as much an error candidate as the code you observe, but I'm convinced that some "almost formalized" routine could help a lot.
Good points on the tool side. While "debugger driven development" is rightfully considered an anti-pattern, the tool-shaming that sometimes emerges from that consideration is a huge mistake.
I worked with programmers around my junior year and some of them were in classes I was in. I thought they were all playing one-upsmanship when I heard how little time they were spending on homework. 90 minutes, sometimes an hour.
I was a lot faster than my roommate, and after I turned in my homework I’d help him debug (not solve) his. Then I was helping other people. They really did not get debugging. Definitely felt like a missing class. But it helped me out with mentoring later on. When giving people the answer can get you expelled, you have to get pretty good at asking leading questions.
Then I got a real job, and within a semester I was down below 2 hours. We just needed more practice, and lots of it.
This is why internships and real-world experience are so important. A course is 3 in-class hours a week over 12-14 weeks typically. After homework and assignments it is ultimately maybe 40-80 hours of content.
Which means you learn more in one month of being on a normal, 40 hour workweek job than you have in an entire semester of one course.
Not all hours are created equal. This is on the verge of saying “I took 1,000 breaths on my run, so if I do that again, it’s like going for a run.” Just because you’re measuring something, it doesn’t mean that you’re measuring the right thing. You’re just cargo-culting the “formal education is useless” meme.
Were you the sort of person who responsibly worked a little bit on the assignments over the course of the week/two weeks, or did you carve out an evening to try to get the whole thing done in one or two sittings?
My group did the latter. I think based on what we know now about interruptions, we were likely getting more done per minute than the responsible kids.
Including reading, we might have been doing 15 hours a week sustained, across 2-3 core classes.
But these were the sort of people who got their homework done so they could go back to the ACM office to work on their computer game, or work out how to squeeze a program we all wanted to use into our meager disk space quota.
Anything more than a B was chasing academia over practical knowledge. B- to C+ was optimal.
I believe that software-related college degrees are mainly there to get the horrible first few tens of thousands of lines of code out of people before they go into industry.
What do you mean by people trying random things? I think that approach (if I understand the term correctly) is more or less what debugging is as a form of scientific investigation.
If you observe a car mechanic trying to find the problem with a car, he would go like: "is this pin faulty? No. Is the combustion engine faulty? No. Are the pedals faulty? Yes." where the mechanic starts with some assumptions and disproves them by testing each of those assumptions until (hopefully) the mechanic finds the cause of the fault and is able to fix it. Similar types of investigations are important to how natural science is done.
So it would be helpful if you can clarify your intended meaning a bit more. Maybe I or someone else would learn from it.
Trying random things seems to be how a large number of professional software engineers do their jobs. Stack Overflow and now CodeGPT seem to contribute to this.
I'm not sure if software engineering classes in particular do, but at my university, they teach C++ in the second required course, and they teach you about using GDB and Valgrind on Linux there. They don't explicitly teach you about systematically debugging, though, beyond knowing how to use those two programs.
Rose Hulman is an undergraduate teaching school that also has a distinction between a Software Engineering Degree and a Computer Science Degree. The Software Engineering degree takes a few less math courses and instead takes classes on QA/Testing, Project Management, and Formal Methods
I chose software engineering. 3 years into the program the head of the department made a speech at an event to the effect of "Software hasn't changed in the last 10 years". It instantly devalued the entire program for me.
I have news for you... He's not wrong. The porcelain is different, but the same methodologies and processes are in place. The biggest change recently is distributed (mostly ignored) version control, that's 20 years old, and continuous integration/deployment (probably also around 20 years old, but only catching on in the last 10-15 years).
Computer science has changed more, there are lots of developments in the last 5-10 years.
The biggest change I’ve seen in 20 years is that things like DVCS and actual good tool chains for languages people actually use are available and in use.
If you switch to a different framework that does the same things slightly differently and makes something more convenient, and do that three times over the years, that's still perfectly consistent with "Software hasn't changed in the last 10 years" - it's simply not a meaningful change, nor would be switching to a different programming language.
I know where you come from and I know where the people who are responding to you come from too.
Software has changed in the last 10 years, but a lot of it has changed superficially. A green software engineer most likely won't be able to tell the difference between a superficial change and a fundamental change.
It has a lot to do with the topic of this thread. "Quality Software" is a loaded term. There's no formal definition, everyone has their own opinion on it, and even then these people with "opinions" can't directly pinpoint what it is. So the whole industry just builds abstraction after abstraction without knowing whether the current abstraction is actually closer to "quality" than the previous abstraction. It all starts out with someone feeling annoyed, then they decide to make a new thingie or library, and then they find out that this new thing has new annoyances, and the whole thing moves in a great flat circle.
That's the story of the entire industry just endless horizontal progress without ever knowing if we're getting better. A lot of the times we've gotten worse.
That being said there have been fundamental changes. Machine learning. This change is fundamental. But most people aren't referring to that here.
> "Software hasn't changed in the last 10 years". It instantly devalued the entire program for me.
As opposed to maths, physics, philosophy, civil engineering, classical studies which have gone through complete revolutions in their topics, problems and study methods in the last 10 years?
It seems to vary a lot from school to school. At my university (2010-2014) we were writing unit tests from the first CS class. I can't say we got much instruction on how to structure more complex applications in order to make them testable, however. Things like dependency injection and non-trivial mocking have to be learned on the job, which is a real shame. Even within the industry skills and approaches to designing code for tests feel very heterogeneous.
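(A toy illustration of that "design for testability" gap, in case it helps: the names here are invented, but the point is that the report code takes its data source as a parameter instead of constructing an HTTP client internally, so a unit test can inject a fake.)

    class ReportService:
        def __init__(self, fetch_orders):
            # fetch_orders: any callable returning a list of order dicts
            self._fetch_orders = fetch_orders

        def total_revenue(self):
            return sum(order["amount"] for order in self._fetch_orders())

    # Production code would pass in a real API client method; the test injects a stub.
    def test_total_revenue():
        fake_orders = lambda: [{"amount": 10}, {"amount": 32}]
        assert ReportService(fake_orders).total_revenue() == 42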
Not the person you replied to, but I am aware that a functional programming course by CMU has had lectures (not exercises or content) freely available as YouTube recordings. https://brandonspark.github.io/150/
So, I have not done research in any software engineering field and have not read all that much either. One example that comes to mind from one of my courses in software engineering is research around mutation-based testing. That form of testing is where you generate random variants of your program by doing things like deleting a statement, adding a statement, changing a less-than sign to a greater-than sign, etc. Then you check that at least one of your tests fails for each variant. If none does, you either add a test or mark that variant as being effectively the same program (I forget what the term for it is). At any rate, I think there is still research being done on this topic, for example how to effectively generate mutants without producing as many functionally identical programs. Software testing in general is a big part of software engineering, and I think there is still a fair amount of research that could be done about it.
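(To make that concrete, here's a toy sketch of the idea, not a real mutation testing tool. It uses Python's ast module (ast.unparse needs 3.9+) to flip a >= into a > and then reruns a deliberately weak, made-up "test suite". Because no test probes the age == 18 boundary, the mutant survives, which is exactly the signal that a test is missing.)

    import ast

    SOURCE = "def is_adult(age):\n    return age >= 18\n"

    def mutate(source):
        """Produce one mutant: change >= to > (a classic off-by-one mutation)."""
        class Flip(ast.NodeTransformer):
            def visit_Compare(self, node):
                node.ops = [ast.Gt() if isinstance(op, ast.GtE) else op for op in node.ops]
                return node
        return ast.unparse(Flip().visit(ast.parse(source)))

    def tests_pass(source):
        """A deliberately weak test suite run against the given source."""
        ns = {}
        exec(source, ns)
        return ns["is_adult"](20)          # never checks the age == 18 boundary

    mutant = mutate(SOURCE)                # is_adult now uses > instead of >=
    print("mutant survived" if tests_pass(mutant) else "mutant killed")
    # Prints "mutant survived": the suite needs a test for exactly 18.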
In my opinion, the intersection of cognitive psychology and software engineering is also ripe for a lot of research. I feel like we as software engineers have a lot of intuitions about how to be productive, but I would love to see various things that could indicate productivity measured in a controlled lab setting.
No, mutation-based testing is different. Fuzzing varies the input to the program. Mutation testing varies the program itself as a means of testing the quality of the tests.
Testing, fuzzing, and verification are all pretty hot SE topics now, at least in my department at CMU. All of those have some element of program analysis, both static and dynamic, so there's potentially deep PL stuff, which is core CS. There's aspects to SE that are not just that, of course, which are studying social processes, management styles and practices, software architectures, design patterns, API design, etc.
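(Since fuzzing keeps coming up: the contrast is that mutation testing varies the program, while fuzzing keeps the program fixed and varies the input. A bare-bones random fuzz loop looks something like the sketch below; parse_record is a made-up stand-in for whatever function is under test, with ValueError treated as its documented failure mode.)

    import random
    import string

    def parse_record(line):
        """Toy function under test: expects 'name,age'."""
        name, age = line.split(",")
        return name, int(age)

    def fuzz(fn, iterations=10_000, seed=0):
        """Throw random strings at fn and report anything other than the expected error."""
        rng = random.Random(seed)
        for _ in range(iterations):
            candidate = "".join(rng.choice(string.printable)
                                for _ in range(rng.randint(0, 20)))
            try:
                fn(candidate)
            except ValueError:
                pass                        # documented/acceptable failure mode
            except Exception as exc:        # anything else is a finding
                print(f"unexpected crash on {candidate!r}: {exc!r}")

    fuzz(parse_record)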
I feel like the Toyota Production System could be applied. I know they used some of it at Amazon, but in general I don't hear much about it in 'software spaces'. It's been studied a huge amount in quality theory but... it seems like there's not a lot of crossover between manufacturing and software.
Like the idea of a Poka-Yoke, which could cover all manner of automated analysis tools: from -Wall in gcc to the Rust borrow checker, basically just Poka-Yoke.
The reason I find it easier to work with people who have a degree in Computer Science is that I don't have to convince them of the need for good algorithms and not to try to implement parsers or cryptography by hand.
When it comes to software engineering I feel there is no qualification where you can feel that the gross naivety about quality and working in teams (and with other teams) has been similarly ironed out.
Instead you get people hardened in sin from their repeated experience of writing one horrible bit of untested code quickly and then leaving before the shit truly hit the fan :-)
One's management is very impressed with those kind of people and very contemptuous of those left behind who have to painstakingly clean up the mess.
Hum, interesting perspective. I did 95% of a masters in CS before leaving to do a startup, and while I can see the value of parser generators, there are a LOT of times when it is appropriate, useful, and more performant to write your own simple parser. It's often, in my opinion, the right thing to do first for simple cases. Clearly you should test it and consider functional requirements, but a stringy protocol with clear delimiters is usually dead simple to parse, and depending on your use case you can focus on a subset. If you're writing a language... my advice might be different.
I've never had it once in my career where using a parser generator wasn't better. Given that it's an in-language parser generator and not some meta-language monstrosity like ANTLR.
Maybe when writing your own programming language, your own complicated data format, or a low-level communication protocol while requiring extreme performance. But that seems to be super rare, at least in my area of profession.
I’ve had a very different experience. I can think of three occasions where I was able to compare a hand-written parser with something built using a parser generator or similar tool. In two of those cases, the hand-written code was far easier to read and modify. This kind of code can also be easier to test.
Parser generators aren’t a free lunch and they vary considerably in quality.
Maybe we talk about two different things. I'm talking about libraries that help you write a parser. Those are not tools, there is no extra build-process involved or anything like that.
My impression is that the gist is: when you think like an engineer, your focus is on problem solving, and on using the appropriate tool(s) to do that. On the other hand, the typical developer's instinct is to code, code, code; at least based on my experience.
But I also wasn't focused on parser tools. My observation was more universal. That is, engineers look before they leap. Developers leap first and ask questions later. Engineers are intentional. Developers much less so, and far more reactive.
Yeah.. I didn't get rude. Sometimes coders just have the NotBuiltHere attitude. I think it's something you grow out of.
We can build something, or pull something off the shelf. If it takes x time to build it, and then x time to debug and test and x^(1/0) to maintain it; far better to just add a gem. Even if it's not the absolute best, at least it's easy to understand and if it becomes a problem fix the edges.
I've worked with people who thought parsers were straightforward, and trying to fix the bugs in their code was fraught with impossibility - there can sometimes be millions of ways for parsers to accept invalid input or not accept valid input.
In one case I gave up on fixing the code where every change introduced new possible bugs and used a parser generator. We never had another bug in that part of the code but my wholesale change caused intense friction.
I feel that a course in parsers would have helped that person to understand this wasn't an appropriate situation.
In fact I think it's a good idea to have BNF "that works" before you hand code anything just to confirm that you understand your own language design.
Cryptography yes, but are you sure about parsers? As far as I can tell, there's some kind of U-curve there. Beginners code them by hand, intermediate-level programmers and intermediate-scope projects use parser generators, and people maintaining the most sophisticated parsers prefer to code them by hand too. For example, GCC used to have a bison parser, but they switched to a hand-coded recursive descent one because that let them produce more helpful error messages. Clang uses recursive descent too.
To be fair though, the JetBrains use case is fairly unique, as they basically want to implement parsing for as many languages as possible, all while doing it in a very structured and consistent way, with many other parts of their infrastructure being dependent on that parsing API.
I think it's fair to say that those requirements are outside of the norm
I think that's a fine observation, but I'll also add that since their cases are almost always consumed in an editor context, they need them to be performant as well as have strong support for error recovery, since (in my mental model) the editor spends 90% of its time in a bad state. If I understand tree-sitter correctly, those are some of its goals, too, for the same reason
Pushback on parsers. It's very difficult to provide useful diagnostic error messages with yacc/bison. So most languages end up with a hand-written recursive descent parser.
The only exception I personally know of is jq (uses bison). So it's difficult to produce helpful syntax error messages in the implementation of jq.
> The reason I find it easier to work with people who have a degree in Computer Science is that I don't have to convince them of the need for good algorithms and not to try to implement parsers or cryptography by hand.
Cryptography and parsers simply do not belong in the same sentence. There is never a time when it is appropriate to write your own cryptography. OTOH, most large compiler and interpreter projects have handwritten parsers, and many of them have handwritten lexers too.
Writing a parser can be simple enough to fit into a take-home assignment, and hand-written parser code ends up looking pretty similar to an LL grammar anyway. Parsing is also the easiest part of writing compiler or language tooling, so if a hand-written parser is too high a bar for the team then the entire project might be questionable.
I'm not saying never use a parser generator, but I would personally prefer to work on a project with a well tested hand-written parser than a project using a parser generator. Especially if it complicates the build process with extra tooling, or is something really dated like Bison or ANTLR.
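(For anyone who hasn't written one, this is roughly what "simple enough to fit into a take-home assignment" means: a sketch of a recursive descent parser/evaluator for arithmetic expressions, where each method mirrors one rule of the LL grammar in the docstring. It's a toy, not production parsing advice.)

    import re

    TOKEN = re.compile(r"\s*(?:(\d+)|(\S))")   # integers or single non-space characters

    def tokenize(src):
        for number, op in TOKEN.findall(src):
            yield ("num", int(number)) if number else ("op", op)

    class Parser:
        """Recursive descent for:  expr   -> term (('+'|'-') term)*
                                   term   -> factor (('*'|'/') factor)*
                                   factor -> NUMBER | '(' expr ')'"""

        def __init__(self, src):
            self.tokens = list(tokenize(src))
            self.pos = 0

        def peek(self):
            return self.tokens[self.pos] if self.pos < len(self.tokens) else ("eof", None)

        def eat(self, expected=None):
            kind, value = self.peek()
            if expected is not None and value != expected:
                raise SyntaxError(f"expected {expected!r}, got {value!r}")
            self.pos += 1
            return value

        def expr(self):
            value = self.term()
            while self.peek()[1] in ("+", "-"):
                op, rhs = self.eat(), self.term()
                value = value + rhs if op == "+" else value - rhs
            return value

        def term(self):
            value = self.factor()
            while self.peek()[1] in ("*", "/"):
                op, rhs = self.eat(), self.factor()
                value = value * rhs if op == "*" else value / rhs
            return value

        def factor(self):
            kind, value = self.peek()
            if kind == "num":
                return self.eat()
            self.eat("(")           # anything else must be a parenthesized expression
            value = self.expr()
            self.eat(")")
            return value

    print(Parser("2 * (3 + 4)").expr())   # prints 14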
It is a culture thing. Try to avoid cowboy shops. The thing is the general standard seems higher than 20 years ago. Source control, unit testing and CI/CD are not controversial any more for example.
Yep. If you have written production-grade software at real companies, you know that the moment you make that new commit (even if it's a one-liner change), you are ready to accept that it could break something. Yes, you can do your unit tests, integration tests, User Acceptance Tests and what not. But every code change = a new possible bug that you may not be able to catch until it occurs to a customer.
Whenever I hear a developer say "I never ship buggy code", I am always cautious to dig in more and understand what they mean by that.
It's always amazing when I get a bug report from a product that's been running bug free in production for years with minimal changes but some user did some combination of things that had never been done and it blows up.
Usually it's something extremely simple to fix too.
This happens a lot more than one may think, especially with products that have a lot of features. Some features are used sparingly, and the moment a customer uses that feature a bit more in depth, boom. Something is broken.
> especially with products that have a lot of features
No kidding. I'm 2 or 3 years into working on a SaaS app started in ~2013 and I still get bug reports from users that make me say "what!? we have that feature!?"
I never really got how proofs are supposed to solve this issue. I think that would just move the bugs from the code into the proof definition. Your code may do what the proof says, but how do you know what the proof says is what you actually want to happen?
A formal spec isn't just ordinary source-code by another name, it's at a quite different level of abstraction, and (hopefully) it will be proven that its invariants always hold. (This is a separate step from proving that the model corresponds to the ultimate deliverable of the formal development process, be that source-code or binary.)
Bugs in the formal spec aren't impossible, but use of formal methods doesn't prevent you from doing acceptance testing as well. In practice, there's a whole methodology at work, not just blind trust in the formal spec.
Software developed using formal methods is generally assured to be free of runtime errors at the level of the target language (divide-by-zero, dereferencing NULL, out-of-bounds array access, etc). This is a pretty significant advantage, and applies even if there's a bug in the spec.
> A formal spec isn't just ordinary source-code by another name, it's at a quite different level of abstraction
This is the fallacy people have when thinking they can "prove" anything useful with formal systems. Code is _already_ a kind of formal specification of program behavior. For example `printf("Hello world");` is a specification of a program that prints hello world. And we already have an abundance of tooling for applying all kinds of abstractions imaginable to code. Any success at "proving" correctness using formal methods can probably be transformed into a way to write programs that ensure correctness. For example, Rust has pretty much done so for a large class of bugs prevalent in C/C++.
The mathematician's wet dream of applying "mathematical proof" on computer code will not work. That said, the approach of inventing better abstractions and making it hard if not impossible for the programmer to write the wrong thing (as in Rust) is likely the way forward. I'd argue the Rust approach is in a very real way equivalent to a formal specification of program behavior that ensures the program does not have the various bugs that plagues C/C++.
Of course, as long as the programming language is Turing Complete you can't make it impossible for the programmer to mistakenly write something they didn't intend. No amount of formalism can prevent a programmer from writing `printf("hello word")` when they intended "hello world". Computers _already_ "do what I say", and "do what I mean" is impossible unless people invent a way for minds to telepathically transmit their intentions (by this point you'd have to wonder whether the intention is the conscious one or the subconscious ones).
> thinking they can "prove" anything useful with formal systems
As I already said in my reply to xmprt, formal methods have been used successfully in developing life-critical code, although it remains a tiny niche. (It's a lot of work, so it's only worth it for that kind of code.) Google should turn up some examples.
> Code is _already_ a kind formal specification of program behavior.
Not really. Few languages even have an unambiguous language-definition spec. The behaviour of C code may vary between different standards-compliant compilers/platforms, for example.
The SPARK Ada language, on the other hand, is unambiguous and is amenable to formal reasoning. That's by careful design, and it's pretty unique. It's also an extremely minimal language.
> `printf("Hello world");` is a specification of a program that prints hello world
There's more to the story even here. Reasoning precisely about printf isn't as trivial as it appears. It will attempt to print Hello world in a character-encoding determined by the compiler/platform, not by the C standard. It will fail if the stdout pipe is closed or if it runs into other trouble. Even a printf call has plenty of complexity we tend to just ignore in day to day programming, see https://www.gnu.org/ghm/2011/paris/slides/jim-meyering-goodb...
> Any success at "proving" correctness using formal methods can probably be transformed into a way to write programs that ensure correctness
You've roughly described SPARK Ada's higher 'assurance levels', where each function and procedure has not only an ordinary body, written in SPARK Ada, but also a formal specification.
SPARK is pretty challenging to use, and there can be practical limitations on what properties can be proved with today's provers, but still, it is already a reality.
> Rust has pretty much done so for a large class of bugs prevalent in C/C++
Most modern languages improve upon the appalling lack of safety in C and C++. You're right that Rust (in particular the Safe Rust subset) does a much better job than most, and is showing a lot of success in its safety features. Programs written in Safe Rust don't have memory safety bugs, which is a tremendous improvement on C and C++, and it manages this without a garbage collector. Rust doesn't really lend itself to formal reasoning though, it doesn't even have a proper language spec.
> The mathematician's wet dream of applying "mathematical proof" on computer code will not work
Again, formal methods aren't hypothetical.
> I'd argue the Rust approach is in a very real way equivalent to a formal specification of program behavior that ensures the program does not have the various bugs that plagues C/C++.
It is not. Safe languages offer rock-solid guarantees that certain kinds of bugs can't occur, yes, and that's very powerful, but is not equivalent to full formal verification.
It's great to eliminate whole classes of bugs relating to initialization, concurrency, types, and object lifetime. That doesn't verify the specific behaviour of the program, though.
> No amount of formalism can prevent a programmer from writing `printf("hello word")` when they intended "hello world"
That comes down to the question of how do you get the model right? See the first PDF I linked above. The software development process won't blindly trust the model. Bugs in the model are possible but it seems like in practice it's uncommon for them to go unnoticed for long, and they are not a showstopper for using formal methods to develop ultra-low-defect software in practice.
> "do what I mean" is impossible unless people invent a way for minds to telepathically transmit their intention
It's not clear what your point is here. No software development methodology can operate without a team that understands the requirements, and has the necessary contact with the requirements-setting customer, and domain experts, etc.
I suggest taking a look at both the PDFs I linked above, by way of an introduction to what formal methods are and how they can be used. (The Formal methods article on Wikipedia is regrettably rather dry.)
I think the reason that formal proofs haven't really caught on is because it's just adding more complexity and stuff to maintain. The list of things that need to be maintained just keeps growing: code, tests, deployment tooling, configs, environments, etc. And now add a formal proof onto that. If the user changes their requirements then the proof needs to change. A lot of code changes will probably necessitate a proof change as well. And it doesn't even eliminate bugs because the formal proof could include a bug too. I suppose it could help in trivial cases like sanity checking that a value isn't null or that a lock is only held by a single thread but it seems like a lot of those checks are already integrated in build tooling in one way or another.
Yes, with the current state of the art, adopting formal methods means adopting a radically different approach to software development. For 'rapid application development' work, it isn't going to be a good choice. It's only a real consideration if you're serious about developing ultra-low-defect software (to use a term from the AdaCore folks).
> it doesn't even eliminate bugs because the formal proof could include a bug too
This is rather dismissive. Formal methods have been successfully used in various life-critical software systems, such as medical equipment and avionics.
As I said above, formal methods can eliminate all 'runtime errors' (like out-of-bounds array access), and there's a lot of power in formally guaranteeing that the model's invariants are never broken.
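To make the 'runtime errors' point concrete, here's a tiny hand-written C illustration (not SPARK, and not how the modelling itself is done, just the flavour of the obligation): a verifier demands evidence that the index is in range at every call site, so the check becomes a proof obligation rather than something you hope a test happens to hit.

    #define N 10

    /* Hypothetical example: a prover would require showing 0 <= i < N
       wherever get() is called, making an out-of-bounds access
       impossible rather than merely untested. */
    int get(const int buf[N], int i)
    {
        /* In plain C this is just a defensive check; in a verified
           subset it would be a precondition discharged at proof time. */
        if (i < 0 || i >= N) {
            return -1;  /* error sentinel for the sketch */
        }
        return buf[i];
    }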
> I suppose it could help in trivial cases like sanity checking that a value isn't null or that a lock is only held by a single thread
No, this doesn't accurately reflect how formal methods work. I suggest taking a look at the PDFs I linked above. For one thing, formal modelling is not done using a programming language.
You're mixing up the development problem with the computational problem.
If you can't use formal proof where it's supposed to be necessary just because the user can't be arsed to wait, then the project's conception is simply not well designed.
As always, the branding of formal methods sucks. As other commentators point out, it isn't technically possible to provide a formal proof that software is correct. And that is fine, because formal software methods don't do that.
But right from the outset the approach is doomed to fail because its proponents write like they don't know what they are talking about and think they can write bug-free software.
It really should be "write software with a formal spec". Once people start talking about "proof" in practice it sounds dishonest. It isn't possible to prove software and the focus really needs to be on the spec.
> It really should be "write software with a formal spec".
The code is already a formal spec.
Unless there are bugs in the language/compiler/interpreter, what the code does is essentially formally well defined.
As programming languages get better at enabling programmers to communicate intention as opposed to being a way to generate computer instructions, there's really no need for a separate "spec". Any so called "spec" that is not a programming language is likely not "formal" in the sense that the behavior is unambiguously well defined.
Of course, you might be able to write the "spec" using a formal language that cannot be transformed into machine code, but assuming that the "spec" is actually well defined, then it's just that "compiling" the spec into machine code is too expensive in some way (eg. nobody has written a compiler, it's too computationally hard to deduce the actual intention even though it's well defined, etc.). But in essence it is still a "programming language", just one without a compiler/interpreter.
You can formally prove that it doesn't have certain kinds of bugs. And that's good! But it also is an enormous amount of work. And so, even for life-critical software, the vast majority is not formally proven, because we want more software than we can afford to formally prove.
This is an interesting point that I think a lot of programming can miss.
Proving that the program has no bugs is akin to proving that the program won't make you feel sad. Like ... I'm not sure we have the math.
One of the more important jobs of the software engineer is to look deep into your customer's dreams and determine how those dreams will ultimately make your customer sad unless there's some sort of intervention before you finish the implementation.
Exactly, it's fundamentally impossible. Formal proofs can help with parts of the process, but they can't guarantee a bug-free product. Here are the steps of software delivery and the transitions between them; it's fundamentally a game of telephone, with errors at each step along the way.
What actually would solve the customer's problem -> What the customer thinks they want -> What they communicate that they want -> What the requirements collector hears -> What the requirements collector documents -> How the implementor interprets the requirements -> What the implementor designs/plans -> What the implementor implements.
Formal proofs can help with the last 3 steps. But again that's assuming the implementor can formalize every requirement they interpreted. And that's impossible as well, there will always be implicit assumptions about the running environment, performance, scale, the behavior of dependent processes/APIs.
It helps with a small set of possible problems. If those problems are mission-critical then absolutely tackle them, but there will never be a situation where it can help with the first 5 steps of the problem, or with the implicit items in the 6th step above.
Even formally proved code can have bugs. The obvious case is when your requirement is wrong. I don't work with formal proofs (I want to, I just don't know how), but I'm given to understand they have other real-world limits that can still let bugs through.
Or perhaps you'll prove it from first principles. Although if it turns out to be difficult, that's okay. Somebody mentioned something about systems being either complete or consistent but never both. Some things can be true but not provably so. Can't quite remember who it was though.
Gödel's really was a rather unique mind, and the story of his death is kind of sad.. but I wonder if it takes such a severe kind of paranoia to look for how math can break itself, especially during that time when all the greatest mathematicians were in pursuit of formalizing a complete and consistent mathematics.
No. It merely prevents you from confirming every arbitrarily complex proof. Incompleteness is more like: I give you a convoluted mess of spaghetti code and claim it computes prime numbers and I demand you try to prove me wrong.
There are well-formed statements which assert that their own Gödelized value represents a non-provable theorem.
Therefore, you must either accept that both it and its negation are provable (leading to an inconsistent system), or not accept it, in which case it is true yet not provable within the system (leaving the system incomplete).
Furthermore, this can be constructed in anything with basic arithmetic and induction over first-order logic (Gödel's original paper covered how broadly it applies, to basically every such logical system).
The important thing to note is that it doesn't have anything to do with truth or truth-values of propositions. It breaks the fundamental operation of the provability of a statement.
And, since many proofs are done by assuming a statement's inverse and trying to prove a contradiction, having a known contradiction in the set of provable statements can effectively allow any statement to be proven. Keeping the contradiction is not actually an option.
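The "one contradiction proves everything" step is just the principle of explosion; a minimal sketch in Lean 4 (my example, relying only on the core `absurd` lemma):

    -- From a proof of P and a proof of ¬P, any proposition Q follows.
    theorem explosion (P Q : Prop) (hp : P) (hnp : ¬P) : Q :=
      absurd hp hnp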
> If you believe you can ship bug free code, it's time to switch careers.
Unfortunately, you are correct. Shipping on time and shipping bug-free are inversely related, and in a world where it's usually hard to argue with PMs for more time for better testing or for paying down tech debt... it's just reality.
An infinite amount of time would not necessarily yield zero bugs.
But more importantly, once you've fixed the "show-stopping bugs," putting the software in front of customers is probably the best next step, as even if it's bug-free, that doesn't mean it solves the problem well.
There is no such thing as zero bugs. There is only a marker in time for a suite of tests that shows no bugs. That doesn't mean larvae aren't living under the wood. You can't control all the bits (unless you built your own hardware/software stack).
I think we're saying the same thing? That was my point. You're never going to achieve zero bugs no matter how much time you give yourself. Focus on getting the critical path right and creating a good experience, and then get it to customers for feedback on where to go next.
[The above does not necessarily apply in highly regulated industries or where lives are on the line]
I like to think of "zero bugs" as the asymptote. As you spend more time, you discover increasingly fewer (and less significant) bugs per unit of time. POSSIBLY at the limit of infinite time you hit 0 bugs, but even if you could, would it be worth it? Doubtful.
I can think of far better ways to spend infinite time.
0 bugs is actually impossible. A cosmic ray can flip a bit and change the behavior of your software. We live in a fundamentally unreliable universe.
We aren't taught how to write reliable software because very few people know how to write reliable software. It doesn't help that academia has a hard crush on OOP, which is a bag of accidental complexity - and complexity is a breeding ground for unreliability.
I think if a cosmic ray flips the bit and changes the behavior of your software, you can still reasonably brag that you wrote 0-bug code. It's not your fault that happened, you didn't do that. The code you wrote had 0 bugs.
I would say that also applies on highly regulated industries or where lives are on the line.
On those you're of course expected to do safety work and testing up to the limit of the "value of a statistical life" within the expected project impacts, but it still has time and budget limits.
The part I was suggesting does not apply is the statement "Focus on getting the critical path right and creating a good experience, and then get it to customers for feedback on where to go next."
Most software engineering is about making sure the happy path works well. When lives are on the line, you need to also plan to minimize the possible damage that can happen when things go wrong.
Yup, I also like how you call out "get it in front of customers" as a step in the whole chain. Often sorely missed. Sometimes a bug to you is a feature to them (gasp!)... so either make it a first-class thing or train them on the correct path (while you fix the "bug").
Ok, I think we've gone too far. There absolutely is such a thing as 0 bugs, and sometimes code changes don't have bugs. That is not to say it can be guaranteed.
We'd need to define "bug", but if a bug is anything a customer (internal or external) is unhappy with that passes triage and can't be thrown back in their face, then zero bugs would be impossible even with infinite time.
That's only true up to a point. There are some quality assurance and control activities that are essentially "free" in that they actually allow for shipping faster by preventing rework. But that requires a high level of team maturity and process discipline, so some teams are simply incapable of doing it. And even in ideal circumstances it's impossible to ship defect free software (plus the endless discussions over whether particular issues are bugs or enhancement requests).
yeah, it's a spectrum. Clearly no one is expecting an app to be truly bug free if the underlying compiler itself has bugs. But how often do users truly run into compiler level bugs?
I think when the author says "bug free", it's from the user's perspective: bugs that you either need to go out of your way to trigger, or that are so esoteric it's impossible to imagine hitting them without knowing the code inside out. Games are definitely an industry where code quality has always dipped to the point where users easily hit issues in normal use, and it only gets worse as games get more complex. That's where it gets truly intolerable.
There are tools that help, but you still need time to integrate those tools, learn how to use them, etc. If you are doing unit and integration tests, you need time to not only write those, but also actually plan your tests, and learn how to write tests. Which... needs time
This kind of wisdom only comes from experience, I think. Either that or higher-order thinking. Like the article says, most of the time testing/TDD/QA is bolted on after the fact. Or there's a big push at the end with "QA sprints" (are you sprinting or are you examining? what exactly is a QA sprint? I know what it is).
Once you get beyond "I wrote a function" and "I tested a function" and even still "I tested a function that was called by a function over the wire", you will come to a realization that no matter how edgy your edge cases, no matter how thorough your QA, there will always - ALWAYS be 0-day "undefined behavior" in certain configurations. On certain hardware. On certain kernels. It's an assurance that I guarantee that I'm almost positive it's bug free, since it passed tests, it passed human eyes, and it passed review - fingers crossed.
My wife works as an acoustical consultant at a global construction firm. The things you hear about factories, offices, and even hospitals are wild. Don't get me wrong, the construction world works very hard to avoid issues, but I think we in software tend to hold other engineering disciplines up on a pedestal that doesn't quite match the messiness of reality.
Thanks for saying this. I think we in software engineering tend to think too binary: either the product is perfect (100% bug-free) or it's shit. There's always room for improvement, but compared to other engineering, overall, I think we're doing pretty good. As an example similar to your wife's, my friend used to work for one of the major car manufacturers doing almost the exact same job as Edward Norton's character in Fight Club. The cars had "bugs", they knew about it, but they didn't publicly acknowledge it until they were forced to.
There are a few aspects. One is that we don't understand the fundamentals of software as well as the underpinnings of other engineering disciplines.
More importantly though, for the most part we choose not to do engineering. By which I mean this - we know how to do this better, and we apply those techniques in areas where the consequences of failure are high. Aerospace, medical devices, etc.
It differs a bit industry to industry, but overall the lessons are the same. On the whole it a) looks a lot more like "typical" engineering than most software development and b) it is more expensive and slower.
Overall, we seem to have collectively decided we are fine with flakier software that delivers new and more complex things faster, except where errors tend to kill people or expensive machines without intending to.
The other contributing thing is it's typically vastly cheaper to fix software errors after the fact than, say, bridges.
> One is that we don't understand the fundamentals of software as well as the underpinnings of other engineering disciplines.
That sounds like an awfully bold claim. I have the feeling we understand software a lot better than we understand mechanical engineering (and by extension material sciences) or fluid dynamics. By a big margin.
I worked with finite element software and with CFD solvers, you wouldn't believe how hard it is to simulate a proper airflow over a simple airfoil and get the same results as in the wind tunnel.
To the contrary, it's nearly canonical. Most of the problems pointed out in the 70s (mythical man month) have still not been resolved, 50 years later.
>you wouldn't believe how hard it is
Oh, I'd believe it (I've designed and built similar things, and had colleagues in CFD).
But you are definitely cherry picking here. The problem with CFD is we don't understand the fluid dynamics part very well; turbulence is a big unsolved problem still, though we have been generating better techniques. This is so true that in an undergraduate physics degree, there is usually a point where they say something like: "now that you think you know how lots of things work, let's introduce turbulence"
But a lot of mechanical engineering and the underlying physics and materials science is actually pretty well understood, to the degree that we can be much more predictive about the trade offs than typically is possible in software. Same goes for electrical, civil, and chem. Each of them have areas of fuzziness, but also a pretty solid core.
> To the contrary, it's nearly canonical. Most of the problems pointed out in the 70s (mythical man month) have still not been resolved, 50 years later.
Even with all of those applied, we wouldn’t be magically better. Complexity is simply unbounded. It’s almost impossible to reason about parallel code with shared mutable state.
The article is about delivering a complete, working project "on time". I have a neighbor whose home is being renovated and it is already 2x the time the contractor originally quoted.
Of course it is easier for a developer to walk away from something incomplete than an architect and the contractors involved in a physical project, but still, I hardly think that there is really much difference in terms of timelines.
FWIW in my experience delays in e.g. home renos (or for that matter larger scale projects) are mostly for reasons unrelated to the engineering. In software projects, it's probably the #1 reason (i.e. we didn't know how to do it when we started).
Software is still absolutely king for number of large scale projects that just never ship, or ship but never work.
I think your salary observation is more of a firmware vs. hardware thing, rather than "soft" vs "hard" engineering.
Further to that, it's often informative to figure out what makes a company money. The highest paid software development roles tend to be doing things that are closer to revenue, on average. If you are a software developer at a hardware company (or an insurance company, or whatever), you aren't that close. Even worse if you are viewed as a cost center.
>Further to that, it's often informative to figure out what makes a company money. The highest paid software development roles tend to be doing things that are closer to revenue, on average.
yeah. Who are those trillion dollar businesses and what do they rely on?
- Apple: Probably the better example here since they focus a lot on user-facing value. But I'm sure they have their own deals, B2B market in certain industries, R&D, and ads to take into account
- Microsoft: a dominant software house in nearly every aspect of the industry. But I wager most of their money comes not from users but from other businesses. Virtually every other company uses Windows and Word, and those that don't may still use Azure for servers.
- Alphabet: ads. Need I say more? Users aren't the audience, they are the selling point to other companies.
- Amazon: a big user facing market, but again similar to Microsoft. The real money is b2b servers.
- Nvidia: Again, user facing products but the real selling point is to companies that need their hardware. In this case, a good 80% of general computing manufacturers.
- Meta: Ads and selling user data once again
- Tesla: CEO politics aside, it's probably the 2nd best example. Split between a user-facing product that disrupted an industry and becoming a standard for fuel in the industry they disrupted. There are also some tangential products that shouldn't be underestimated, but overall a lot of the value seems to come from serving the user.
General lesson here is that b2b and ads are the real money makers. if you're one level removed that financial value drops immensely (but not necessarily to infeasible levels, far from it).
Trust me when I say this:
even "other" engineering domains have to do patches.
The difference is that software can be used before it is fully ready, and it makes sense to do so. No one can really use a 90% finished power plant, but software at 95% capacity is still usually "good enough"
I think you're 90% there. There is also the cost to apply a patch.
If you want to patch a bridge, it's gonna cost you. Even if you only need to close down a single lane of traffic for a few hours you are looking at massive expenses for traffic control, coordination with transportation agencies, etc.
For most software it's pretty inexpensive to ship updates. If you're a SaaS company regular updates are just part of your business model. So the software is never actually done. We just keep patching and patching.
In some contexts, it is much more expensive to push out updates. For example, in the 00s, I worked on a project that had weather sensors installed in remote locations in various countries and the only way to get new software to them was via dial-up. And we were lucky that that was even an option. Making international long-distance calls to upload software patches over a 9600 baud connection is expensive. So we tested our code religiously before even considering an update, and we only pushed out the most direly needed patches.
Working on SaaS these days and the approach is "roll forward through bugs". It just makes more economic sense with the cost structures in this business.
Thanks for this insight! It has pretty strong explanatory power. It also explains why rushed development can stall. It explains 'move fast and break things'.
There's even an added factor of learning more about what is really needed by putting a 95% done product into use.
Heck, it explains (stretching it here) space-x's success with an iterative approach to rocket design.
I install high voltage switchgear on site.
A common problem is all the changes that have been added during the design stage: circuits that have been removed or altered, work that has kind of mostly been done to the schemes by the overworked secondary engineer. Sometimes the schemes have been changed after all the wiring is completed and shipped to site, making it my pain in the ass when it's time to do the commissioning.
The end result is never 100% perfect, but somewhere in between "not too bad" and "good enough".
The team is flying the airplane while at the same time rebuilding it into a zeppelin, testing new engines in flight.
Or construction. Let's build an apartment block, but in a few apartments we'll test new materials, new layouts, etc. Once the walls of the first apartments are up, we'll let people live there. We'll build however we can, according to the napkin plan. At the end we'll move all the tenants in and stress-test the strength of the structure. Or one day people return home and their apartments have a totally different design and layout because someone from the HOA decided so to get a promotion.
>why do we not accept the same in other engineering domains?
No, you just complain that your taxes are being used to build expensive roads and bridges. Or you think airplanes are far too expensive. Or that new cars are insanely expensive.
There are cost trade-offs. In general, better quality means more expense.
Also, in software there isn't an excess of software engineers relative to the demand for software. So SWEs can get paid a lot to go build crappy software.
Because complexity is boundless and in software it has no cost.
Building a house comes with a restrictive initial budget for complexity: you don't have enough in that budget for rotating floors, or an elevator that is catapulted to the correct floor, etc. These would cost a huge amount in both engineering and implementation time. Less complexity is easier to analyze.
In the case of software, complexity has negligible cost relative to physical systems. You can increase it without limit, but proving the whole stack (hardware, OS, userspace software) correct is likely impossible in certain cases, even with the whole of mathematics.
In addition to the other answers, there is the perennial and depressing one: Software bugs haven't killed enough people in a suitably visible/dramatic way to be regulated that heavily.
We accept this in all fields of engineering. Everything is "good enough" and that seems to work reasonably well. You should remember this next time you hear about car recalls, maintenance work on bridges, or when some component in your laptop flakes out.
I mean, bridges collapse. That hasn't meant we gave up on engineering bridges.
Point being, we have some risk tolerance, even for civil engineering.
Now we don't accept an engineer saying, "this bridge will probably collapse without warning", which we do accept with software. So there is a difference.
It's perfectly acceptable to let bugs escape into production if the "cost" of fixing a bug is higher than its "cost" to the user experience / job to be done. A bug that takes a week to fix but will only be encountered by a small number of users in a small number of obscure scenarios may not need to be fixed.
I think a common error is taking this view in isolation on each bug.
Fact is, if you ship enough 'low probability' bugs in your product, your probabilities still add up to a point where many customers are going to hit several of them.
I've used plenty of products that suffer from 'death by a thousands cuts'. Are the bugs I hit "ship blockers"? No. Do I hit enough of them that the product sucks and I don't want to use it? Absolutely.
Very much this, and low risk bugs compound at scale.
If you're in a very large FAANG-type company, and say you have 1000 components that each ship 1 bug per day with a 0.1% chance of breaking something critical, that translates to a less-than-50% chance you ship a working OS on any given day. And that may mean the entire company's productivity is impacted for the day, depending on how broken it is.
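Rough arithmetic behind that claim, assuming those 1000 daily bugs are independent and each has a 0.1% chance of being critical:

    P(no critical break today) ≈ 0.999^1000 ≈ e^(-1) ≈ 0.37

So on those assumptions there's only about a one-in-three chance that any given day's build is clean, which is indeed well under 50%.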
Software is commonly built on non-fungible components and monopolies.
Right, you don't want to use Microsoft Word, or SalesForce, or Apple vs Android, or X Whatever. It's highly unlikely you'll have a choice if you use it though.
This presupposes that you know what/where bugs will be found and how they'll impact future users. In my experience knowing either of these is very rare at the point where you're "building quality".
it will be necessary to deliver software without bugs that could have reasonably been avoided in time
I've had this sentiment thrown at me too often by peak move-fast-and-break-things types. It's too often a cudgel to dispense with all QA in favor of more new feature development. Shipping shit that has the same pattern of flaws you've encountered in the past, when you've been shown ways to catch them early but couldn't be bothered, isn't accepting that you can't catch everything; it's creating a negative externality.
You usually can make it someone else's problem and abscond with the profits regardless, but that doesn't mean you should.
I think with formal analysis, whole bug classes can be eliminated. Add to that a rigorous programming style, and 'bug-free' code is within reach.
There will remain bugs that make it through, but they will be rare, and will need a chain of mistakes.
Ways of coding to this kind of standard already exist. But they are stupid: things like no dynamic memory allocation, only fixed-length for-loops, and other very strict rules. These are used in aerospace, where bugs are rather costly and rare.
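For a feel of that style, a small hand-rolled C sketch in the spirit of those rules (fixed-size storage, a loop whose bound is a compile-time constant, no dynamic allocation); the names and the 64-sample cap are made up for the example:

    #define MAX_SAMPLES 64           /* fixed capacity, no malloc anywhere */

    static int samples[MAX_SAMPLES]; /* statically allocated storage */

    int sum_samples(int count)
    {
        int total = 0;
        /* Loop bound is a constant; 'count' can only shrink the work,
           never extend it past the buffer. */
        for (int i = 0; i < MAX_SAMPLES; i++) {
            if (i >= count) {
                break;
            }
            total += samples[i];
        }
        return total;
    }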
What seems to yield much better results is to have the program be built by two separate teams to a standard. Then both programs can be run simultaneously, checking each others output — I believe something like this is actually used in the aerospace industry.
It is sad that people on here would believe this and that for whole platforms it is actually true, however, it absolutely is not universally true and the proof is all around us.
Yes, but that software is not bug-free. The claim was not "it's impossible to make software that doesn't exhibit bugs to a casually noticeable degree".
People who know how the sausage is made will always know of a bunch of bugs that haven't been fixed exactly because they aren't impactful enough to be worth the effort required to fix them.
If it works within specs it is bug free. It doesn’t matter how it is made if it works within specs, which is one of the real unfortunate truths of software.
The other is working out the correct specification is far harder than coding is.
For example it is trivial to write a bug free program that multiplies an integer between 3 and 45 by two.
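Something like this, say (a toy C version of that spec; the function name and the out-of-range handling are mine):

    /* Spec: multiply an integer between 3 and 45 by two.
       With the domain pinned down this tightly there is nothing left
       to get wrong: no overflow, no edge cases outside 3..45. */
    int double_in_range(int x)
    {
        if (x < 3 || x > 45) {
            return -1;  /* outside the specified domain; the spec says nothing here */
        }
        return 2 * x;   /* result is always between 6 and 90 */
    }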
Most devices work within the spec 99.9% of the time, but for that last 0.1% they are outside the spec. The exact percentage differs between projects, of course, but the idea is still there: no software operates according to spec 100% of the time.
Sure, but adding two ints is trivial. Hello world probably operates to spec all the time too. Almost all software is vastly more complex and isn't perfect.
Some people obviously aren't true Scotsman... I'm from the US and have no attachment to Scotland; if I claimed to be a Scotsman and you pointed out that I'm not, and I said "well that's just the no true Scotsman fallacy!", then I would be totally full of it.
In the same way I am not a real Scotsman, your toy example of an easily specified snippet of a function that doesn't do anything useful is not real software.
As you alluded to, in practice no spec fully specifies a truly bug-free implementation. If you want to consider bugs that are within the specification as bugs in the spec rather than bugs in the implementation, fine, but in my view that is a distinction without a difference.
(Personally, I think code is itself more analogous to the specification artifacts of other professions - eg. blueprints - and the process of creating the machine code of what is analogous to construction / manufacturing something to those specs.)
And even having said that, the vast majority of "bug free" software that nearly always appears to be operating "within spec" will have corner cases that are expressed in very rare situations.
But none of this is an argument for nihilism about quality! It is just not the right expectation going into a career in software that you'll be able to make things that are truly perfect. I have seen many people struggle with that expectation mismatch and get lost down rabbit holes of analysis paralysis and overengineering because of it.
> in practice no specs fully specify a truly bug free implementation.
Except for ones that do, obviously.
The key reason to make the distinction is that the fuzzy business of translating intention into specification needs to be fully accepted as an ongoing negotiation over exactly what the specification is, and integrated into repeated, deterministic verification that that is what has been delivered. Failing to do that is mainly a great way for certain software management structures to manipulate people by ensuring everything is negotiable all the time, and it has the side effect that no one can even say whether something is a bug or not. (And this pattern is very clear in the discussion in this thread: there is a definite unwillingness to define what a bug is.)
IME the process of automated fuzzing radically improves all-round quality simply because it shakes out so many of the implicit assumptions and forces you to specify the exact expected results. The simple truth is most people are too lazy and/or lack the discipline needed to do it.
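For example, a minimal libFuzzer-style harness in C looks roughly like this (assuming clang's -fsanitize=fuzzer; parse_record is a hypothetical function under test):

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical parser whose implicit assumptions we want shaken out. */
    int parse_record(const uint8_t *data, size_t len);

    /* The fuzzer repeatedly calls this entry point with generated inputs;
       crashes and sanitizer violations become concrete, reproducible bugs. */
    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
    {
        parse_record(data, size);
        return 0;
    }

The harness itself only catches crashes and sanitizer violations; adding assertions about expected results is where the "forces you to specify" part comes in.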
Those don't exist. There are too many free variables. Some get much closer than others (for instance via formal verification), but all specs are by necessity a model of reality, not the full reality itself.
Nobody actually has any trouble knowing what a bug is. Like, this is just a non-issue, I've never in my career spent a non-negligible amount of time debating with anybody whether something is or isn't a bug. We discuss whether fixing bugs have worthwhile return on investment, and we discuss the relative priority between fixing bugs and doing other things, but this meta-debate about "well technically it complies with this spec so is it even a bug, really?" just never comes up. We all know what bugs are.
Fair enough. I considered a bug to be any behavior the engineers didn't plan in the code. They have their own specification, in their heads, that is more technical/exact than the business specification. Your definition is also reasonable but it's not what people mean when they say "there's no such thing as bug-free code", because bugs of my definition are almost unavoidable.
To have no bugs, which is extremely unlikely for a program of any real complexity. Having bugs, and being functional, are fairly self-explanatory and independent of each other. No need to try to conflate them.
Not sure what your quote is supposed to mean. That's a textbook example of someone who doesn't understand software at all making laughable requests of their engineers.
To be bug free we must be able to define what a bug is. So, what is a bug?
The reason for that quote: from what you have said, a bug would be anything you didn't expect, whether or not it is consistent with the specification, since that merely affects whether we classify it as functional (a classification I profoundly disagree with, obviously). It is simply a negative rephrasing of what the marketing guy said, and laughable in the same way.
> One plausible definition is “system deviates from its specification”
And that's quite reasonable. So I actually retract my argument.
For my own definition, I was considering a bug to be any behavior that the software engineers weren't expecting. Because those can exist invisibly for a long time until they become so bad they become visible. They can also exist for decades without causing any problems to functionality at all.
Not buildings, not bridges, not cars, not airplanes, not software. There are mistakes in every field of engineering and the best we can hope for is to minimize them as much as possible.
Engineering is knowing (among other things) how to categorize the potential mistakes, develop procedures to reduce the chance of them being made and in the case that some slip through (and they will), estimate their impact and decide when you're "good enough."
Reminds me of a story about an engineer who was in a meeting with managers and partners. The manager was speaking of his team and how they would deliver the software. Then he asked the engineer to assure everyone that the software would be bug-free. The engineer responded by saying he could not guarantee there would be no bugs. The manager went nuts and started screaming.
Engineers cannot be responsible for the whole vertical stack and the components built by others. If somebody claims their software is bug-free, they don't have enough experience yet. Pretty much anything can fail; we just need to test as many cases as possible with the variety of tools available to reduce the chances of bugs.
You might be correct today but that’s a pretty sad state of affairs, don’t you think we can do better? Most other engineering domains can deliver projects without bugs, with various definitions of “bug” of course
I'm not sure about that. Which engineering domain do you have in mind?
Maybe show-stopping bugs are somewhat unique to software engineering, but all somewhat-complex products are flawed to some extent imho.
It might be an unergonomic handle on a frying pan, furniture that visibly warps under the slightest load (looking at you, IKEA shelving), or the lettering coming off the frequently used buttons on a coffee machine.
But there do exist shelves that don’t warp, when used within some reasonable bounds.
I’d also quibble about the buttons on the coffee machine. They might be properly designed, just subject to the normal wear-and-tear that is inevitable in the real world. This is not a defect, physical devices have finite lifespans.
As far as computers go… if we got to the point where the main thing that killed our programs was the hard drives falling apart and capacitors drying out, that would be quite impressive and I think everyone would be a little bit less critical of the field.
Formally verified, bug free software exists. It just costs a LOT to produce, and typically isn't worth it, except for things like cryptographic libraries and life or death systems.
As the discipline has evolved, high-integrity tools are slowly being incorporated into mainstream languages and IDEs to improve quality more cheaply. Compare C++ to Rust, for example: whole classes of bugs are impossible (or much harder to introduce) in Rust.
A shelf is a dumb, primitive, static object though. Even a simple hello world goes through a huge amount of code before anything is displayed on a screen, ANY line of which being faulty could result in a bug visible to the end user. And most of that is not even controlled by the programmer: they might call into libc, which calls into the OS, which calls into drawing/font-rendering libraries, which call into video card drivers that "call" into the screen's firmware.
I think “hello world” is not really the simplest program in this context, in the sense that printing, as you note, involves touching all that complicated OS stuff. In terms of, like, actual logic complexity implemented by the programmer compared to mess carried along by the stack, it is really bad.
But I mean, I basically agree that the ecosystem is too complicated.
To be an engineer is to know the expected system requirements and build a product that is extremely optimized for the system requirements.
There's a saying that I think fits very well here: "Any idiot can build a bridge that stands, but it takes an engineer to build a bridge that barely stands."
You don't want a bridge that takes 50 years and quadrillions of dollars to build; you want a cheap bridge, safe for the next 50 years, done in 2 years.
I would not call the resulting bridge "bug free", of course.
We can certainly do better, but it takes a _lot_ of time, effort, care and discipline; something most teams don't have, and most projects can't afford.
Bugs arise from the inherent complexity introduced by writing code, and our inability to foresee all the logical paths a machine can take. If we're disciplined, we write more code to test the scenarios we can think of, which is an extremely arduous process, that even with the most thorough testing practices (e.g. SQLite) still can't produce failproof software. This is partly because, while we can control our own software to a certain degree, we have no control over the inputs it receives and all of its combinations, nor over the environment it runs in, which is also built by other humans, and has its own set of bugs. The fact modern computing works at all is nothing short of remarkable.
But I'm optimistic about AI doing much better. Not the general pattern-matching models we use today, though these are still helpful with chore tasks, as a reference tool, and will continue to improve in ways that help us write fewer bugs, with less effort. But eventually, AI will be able to evaluate all possible branches of execution, and arrive at the solution with the least probability of failing. Once it also controls the environment the software runs in and its inputs, it will be able to modify all of these variables to produce the desired outcome. There won't be a large demand for human-written software once this happens. We might even ban human-written software from being used in critical environments, just like we'll ban humans from driving cars on public roads. We'll probably find the lower quality and bugs amusing and charming, so there will be some demand for this type of software, but it will be written by hobbyists and enjoyed by a niche audience.
A saying that I once heard and appreciate goes like this:
"A programmer who releases buggy software and fixes them is better than a programmer who always releases perfect software in one shot, because the latter doesn't know how to fix bugs."
Perhaps similar to the saying that a good driver will miss a turn, but a bad driver never misses one.
I think you misunderstand, I'm talking about a programmer who makes perfect, bug-free code in one shot. There are no bugs to catch and fix, because this "perfect" programmer never writes buggy code.
The moral of the sayings is that this "perfect" programmer is actually a bad programmer, because he wouldn't know how to fix bugs, by virtue of never needing to deal with them.
To reuse the driver analogy, the driver who never misses a turn is a bad driver because he doesn't know what to do when he does miss a turn.
If a software developer consistently delivers high-quality software on time and on budget, that means they're good at their job, pretty much by definition. It would make no sense to infer they're bad at fixing bugs.
It would make sense to infer instead that they're good at catching and fixing bugs prior to release, which is what we want from a software development process.
> the driver who never misses a turn is a bad driver because he doesn't know what to do when he does miss a turn
Missing a turn during a driving test will never improve your odds of passing.
The driver who never misses a turn presumably has excellent awareness and will be well equipped to deal with a mistake should they make one. They also probably got that way by missing plenty of turns when they were less experienced.
What we are discussing isn't a real programmer we might actually find. No, we are talking about a hypothetical "perfect" programmer. This "perfect" programmer never wrote a bug in his entire life right from the moment he was born, he never had a "when they were less experienced" phase.
Obviously, that means this "perfect" programmer also never debugged anything. For all the perfect code he writes, that makes him worse than a programmer who writes buggy code but also knows how to go about debugging them.
There are Computer Engineering programs and a few universities that really emphasize internships and hands on practice. But at many universities, the CS department came out of the Math department and is focused on theory. Chemistry isn't Chemical Engineering either. I think that's okay. University isn't just a trade school--the idea behind almost any degree is to train the mind and demonstrate an ability to master complex material.
What society needs is a mix of trade school and traditional university. If a university is not providing both, it is failing everyone. (Except the straw-man rich kid who will inherit a lot of money but isn't expected to run a company or pass the money on to their kids; that happens in storybooks, but in the real world the rich give their kids lots of advantages and eventually expect them to take over and run the family business.)
A pure university education that never considers whether the degree is useful in the real world is a disservice to education. However, a pure trade-school education that teaches how to do something without understanding is not useful either. (I don't think any trade school is that pure: they tell you to ignore the hard stuff but generally give you a deep understanding of some important things.)
> If a university is not providing both they are failing everyone.
Why?
> A pure university education without considering is this degree useful in the real world is a disservice to education.
I think this line of thinking is a much bigger disservice to higher education. It was very tiresome as an undergraduate to be surrounded by people that thought this way - and detrimental to everyone's education.
"I'll never use this knowledge" is the single worst thing you can say as a student, and it needs to be beaten out of undergrads' heads. Not encouraged.
Because, like it or not, most people are going to university to get a better job. Companies like university-educated people because they learn deep thinking. However, they often come out lacking important skills that are needed.
Sure there are a few going to university just for the fun of it. However most are expecting a job. Thus universities should train and emphasize thinking in more specific areas.
> "I'll never use this knowledge" is the single worst thing you can say as a student, and it needs to be beaten out of undergrads' heads. Not encouraged.
This is tricky. I agree undergrads say this all the time when they are wrong, but they don't know it. They have no clue what they will use and what they won't. This is something universities should figure out so they can push people away from things they won't use. OTOH, a lot of what they are really teaching isn't the specific skill but how to research and analyze data to find complex answers; it doesn't matter if you look at data from art or from science, what you are really learning is how to think, and the specific knowledge gained isn't important or the point. (I think this is the point you were trying to make?)
> However they often come out lacking important skills that are needed.
Companies that offer the jobs are the ones that need to offer the job training.
> (I think this is the point you were trying to make?)
Not really, it's that university education is kind of meta/self serving (the goal is not to train X number of students to do Y jobs, it's to give every student at the institution what that institution defines to be an education).
But like you said, the fact this produces better workers is a second-order effect. It's not the goal of most institutions. But not all institutions; some define "well educated" to have lots of industry practicum, and if you want that, go study at those institutions.
My main point is that it's not a "disservice" to eschew practicum or industry training as an educational institution.
What society needs is that second-order effect, though. I don't care about education for the sake of education; I care about what education can do for me and for society. Now, some of what most institutions define as a good education is good for society (the ability to think is very useful), but I don't value or support education because of arbitrary definitions an institution might come up with. I value and support education because educated people tend to show specific abilities in society that I want more people to have. The more universities are in line with that and try to produce it, the more I value and support them. (Note that I didn't formally define what those abilities are; this is a tricky topic that I'm sure to get wrong if I tried!)
When institutions allow students to take degrees that society finds less valuable (art, music...), they are doing society a disservice by not producing what society needs. Now, if the student is wealthy (not rich) enough to afford the price, I don't care: I don't need to impose my values on anyone else. However, most people at a university are not that wealthy (most are young), and so if the degree granted isn't valuable to society, the university robbed that student.
>When institutions allow student to take degrees that society finds less valuable (art,music...) they are doing society a disservice by not producing what society needs.
1. what's wrong with a student pursuing their own personal goals? A person doesn't need to produce for society's sake.
2. despite that sentiment you hold, it's clear many people do value art and music. Maybe not in its pure form, but those artists do in fact fuel industries worth billions. Clearly "society" values something that requires such skills and thinking.
> Companies like university educated people because they learn deep thinking.
No. Companies love hiring higher-ed graduates because it removes a lot of cost and risk for them:
- hiring only people with degrees weeds out everyone unable to cope with a high-stress environment, for whatever reason - crucially, also including people who would normally be protected by ADA or its equivalent provisions in Europe.
- it weeds out people in relationships or with (young) children, which makes them easier to exploit and reduces the amount of unexpected time-off due to whatever bug is currently sweeping through kindergarten/school/whatever. Sure, eventually they will get into relationships and have children as they age, but looking at the age people start to have kids these days [0], that's a solid 5-10 years you can squeeze them for overtime.
- it saves companies a ridiculous amount of training. The old "tradespeople apprenticeship" way is very cost-intensive as you have to train them on virtually anything, not just stuff relevant to the job, e.g. using computers and common office software. Instead, the cost is picked up either by the taxpayer (in Europe) or by the students themselves in the form of college debt. The latter used to be reserved for high-paying jobs such as pilots who have to "work off" their training cost but got compensated really well, nowadays it's standard practice.
- it keeps the employee pool relatively homogeneous. There is a clear bias towards white and Asian ethnicity in the US for higher ed [1], and among top-earning jobs, males still utterly dominate [2].
- related to the above, it also weeds out people from lower economic classes, although at least that trend has been seriously diminishing over the last decades [3].
I agree with you in principle, but it's very easy to have this attitude when the education isn't obscenely expensive.
Which is why the "I'm never going to use this, what a waste of time" feeling among American undergrad students is so common.
If you fix the affordability problem and bring it back to where it was in the mid-70s (inflation adjusted), I think things would be a lot better.
My point is that higher education isn't job training and doesn't pretend to be, and people who think it is or should are the ones that need education the most because they don't seem to get it.
I'm not sure about this part... A very common pattern in my conversations with working class friends and family from my parents' generation is: "we were told that if we sent our kids to college, they'd have better lives than we did, but instead we all just ended up with more debt than we could handle".
It's tricky! If you tell teenagers and their parents the truth - this purely academic program will not train you for any job besides pure academia, which, while it can be a fantastic career, is a super risky hits business in which only a few will truly succeed - then that's only going to sound like a reasonable risk to take for wealthy families. But then you've badly limited your pool of academic researchers to this extremely small and honestly often not as promising set of rich kids.
Maybe one solution (which is not workable in the real world) would be: any academic program that does not have a viable "job training" component should only accept students on academic scholarship, regardless of their own means. If some neutral party thinks they are promising enough in that field to pay their way, they get to go for free, otherwise they don't get to go at all. The programs that do graduate people with directly marketable job skills could keep working the current mercenary way.
The reason this wouldn't work in reality is that the wealthy would still just game the scholarships in some way. Alas.
There is a big difference in value between different degrees in the real world. Yet the costs are similar. What someone studies is very important and universities do not do a good job of telling people that.
There is nothing wrong with art/music/history. If you are interested by all means take a lot of courses in them. You can learn a lot of valuable skills which is why good universities required a diverse background of "generals" that these (and many more) fit into. However they give far more degrees in these things than are needed. (even physics gets more degrees than the world needs - but most getting a physics degree can better pivot to something else well paying).
If you want to know what a university will teach your kids, ask them. They'll even tell you without asking them - it was pretty obvious to me as a dumb high school kid on campus visits what the emphasis of one program or another was going to be.
What I'm saying is: universities are incentivized to mislead people (including themselves!) about this.
If you are a working class family with a kid who is very talented at math, and you go sit down with the counselors and ask them: If my child studies pure theoretical math, will that open them up to a life full of possibilities? they will say "yes, it absolutely will". But that's not true. It might be true, but it's a big risk. It's a risk a wealthy family can very easily absorb. But if this child from this working class family takes on this risk using student debt, it might go poorly. They might very well be good at pure math but not be good enough to go into academia. Then they might be unsure what else they can do with that degree, unable to get their foot in the door at the kinds of employers where just a general proof-of-being-smart degree is enough. And now they have debt and uncertainty about what to do.
It also might work out great! But it's a risk. And I know a number of people who feel they ended up on the wrong side of that risk.
>My point is that higher education isn't job training and doesn't pretend to be, and people who think it is or should are the ones that need education the most because they don't seem to get it.
That was true 50 years ago, but employers turned it into job training. My father in law retired a well off businessman with a History degree from Yale he got in the 50s. You know what a History degree from Yale qualifies you for today? Teaching History and maybe writing some books. The degree didn't change and Yale didn't change.
No, I don't think that's it. I think it is simply that you have to put an awful lot of people through the explore part of the learning loop, to get a handful who will reach the exploit part of the loop, for any given subject.
99% of what we all learn in college is a waste of time for us. But we all have a unique 1% that is vital to who we become. Over time I expect that 1% to become 0.1%, then 0.01%, and for that vitality to become ever more concentrated in that sliver.
>"I'll never use this knowledge" is the single worst thing you can say as a student, and it needs to be beaten out of undergrads' heads.
Everyone will think differently. I've never truly been research-minded, and there are definitely a bunch of odd classes that felt like a waste of my money (something to consider as education gets more expensive). But I do agree that there should be a space to foster researchers, and especially one to round out a student overall, even if that space is more niche. I just don't think that everyone needs to go far into debt for that experience if they just want job training.
So I too desire a more explicit divide than "research university vs. industry university" and wish there were better trade schools focused on software (not 6-month boot camps; think a condensed university program without the elective requirements and maybe fewer supporting classes). But no one seems to be protesting this much.
Strongly disagree with this. If a class (at any level) is strictly teaching "the subject" then that is a very good issue to raise by a student or anyone else. Great teachers don't just teach the subject though, they teach the skills necessary to engage with the subject and then apply them to said subject.
Unfortunately many programs are not designed this way and learning the appropriate skills is left as an exercise to the student usually in a sink or swim approach. So some students come out with the meta skills that a university education is touted for and others do not.
I do agree that "I'll never use this knowledge" can be a miserable attitude to have or engage with - especially when it's just a proxy for "I'm not interested in learning, just in getting good grades" but the idea itself is valid.
Yeah but at those internships you aren’t taught how to build quality software, just how to ship a SPA that connects to an API in 15 weeks (or you’re not hired).
It is a good peek into the professional software world though!
Before you can write quality software you need to be able to write large software. Most interns I see are learning how to work on non-trivial programs as this is their first chance to see something non-trivial. Then they get a "real job" and are shoved into extremely large programs.
Writing a thousand lines of bug free code isn't that hard, so the need for QA practices won't be apparent. Then you get the ten thousand line intern project and discover things are not always that easy. Then we throw you into a multi-million line project and good luck: you need a lot of that QA stuff to make it work.
You learn this at quality shops. 10-15 years ago roughly FAANG.
Today? TailScale and stuff like that.
You can just not have a bunch of pointless microservices, Docker inside your runc, layer upon layer of JSON de/re-serialization, and unit tests written to hit a coverage number while ignoring QuickCheck, Hypothesis, and fuzzing (see the sketch below).
You can use stacked diffs and run on-call rotations out of the team that authors the code, and all the rest of it.
You can minimize dynamic linking and all the other forms of unforced dependency error.
You can understand and play towards the runtimes of all the managed languages in your stack. You can insist that a halfway decent verbal grammar for the languages is “readability”.
It gets shouted down over and over but it’s public knowledge how to ship quality software.
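For concreteness, here is a minimal sketch of the kind of property-based test alluded to above, using Hypothesis. The function under test is purely illustrative, not from any of the tools mentioned:

```python
# A minimal property-based test with Hypothesis. dedupe_keep_order() is a
# made-up example; the point is checking properties over generated inputs
# rather than a handful of hand-picked cases chosen to hit a coverage number.
from hypothesis import given, strategies as st

def dedupe_keep_order(items):
    """Remove duplicates while preserving first-seen order."""
    seen = set()
    return [x for x in items if not (x in seen or seen.add(x))]

@given(st.lists(st.integers()))
def test_dedupe_properties(xs):
    out = dedupe_keep_order(xs)
    assert len(out) == len(set(out))        # no duplicates remain
    assert set(out) == set(xs)              # nothing lost, nothing invented
    assert out == list(dict.fromkeys(xs))   # order of first occurrences kept
```

Run it with pytest and Hypothesis will generate and shrink counterexamples for you; the properties describe what the function must do for any input, which is the part coverage numbers never capture.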
OP is saying that 10-15 years ago FAANG companies (and a few more) were the only ones writing quality software. Now, FAANG doesn't care anymore but there are new unicorns in the making that do care - TailScale being one of them (debatable)
It's not that, it's that quality doesn't matter and mostly neither does technology.
If you are doing network switches or GPUs or server CPUs or whatever, yeah, technology matters. If you are building pretty much any SaaS, MCCA, etc. the tech is literally irrelevant and mostly the more "new" tech you use the worse off you are.
Quality also only matters in some contexts and those are even rarer than the above.
Timing and solving a problem is all that matters in terms of revenue. As long as you aren't too bad, it's fine. The only real five-nines requirement for SaaS is that 99.999% of them would be fine with one or two nines.
They are outliers. Most startups focus on feature velocity and sales first, quality is an afterthought, regardless of what they might say. Also, if your project is open source, quality has a lot more weight towards uptake.
Any advice on how to find these quality shops and this quality knowledge? Any particular things to look for, or particular books/courses you recommend?
> 'If we don't do it now, development efforts (and therefore also costs) will be up 15% in 4 months.'
Yeah you won't get to a point where you'll have a valid-enough metric to make this point.
I was at a startup once. The two founders said "don't write unit tests". I wasn't going to argue with them. I understood what they really meant. We've been too slow, we need to ship as fast as possible. I shipped fast and I shipped quality (ie low defects and outages). I wrote unit tests. They didn't need to know. They just needed the outcome.
The elephant in the room in all of these conversations is that you can walk into any software development shop and they just don't know how to ship both fast and at quality. No matter how much an organization or team tries to revisit/refactor their dev process year-to-year, they're still shipping too slow and at mediocre quality.
The truth is there isn't a magic formula. It's an individual craft that gets you there. It's a team sport. And the context is different enough everywhere you go, yeah, sure, some lightweight processes might abstract across the universe but very little bang for those bucks. Far beyond any other facet of things, you really just have to have a good team with experience and the right value-system/value-delivery-focused wisdom.
1. The premise that college teaches you how to build software in industry is a pretty wild claim.
2. Is this article from the 90s, when we shipped software on CDs or floppy disks? In today's world, where the concept of a "release" is often blurred by continuous delivery pipelines (and this is considered a good practice), having a quality assurance department manually assuring that no bugs are in that release seems downright archaic.
Not everyone is writing a webapp where you can roll out upgrades anytime CI passes, or a phone app that you can upgrade every week. Some of us work on code that will be shipped in a device that is not easy to upgrade.
Absolutely, and industries like medical devices and aviation have extremely strict regulation and procedures regarding the testing of software. The article not mentioning any of those made me conclude that author is referring to regular software.
From the article -- At some point, I realized that I wasn't using the right arguments either. Explaining that the software will be 'more stable' or 'make maintenance much easier' is not palpable for someone who doesn't work in the codebase themselves. We need to speak about money. As developers, we need to speak about the cost of not doing QA. This is the language of business and managers in general.
The more general way of saying this is, "If you need someone's approval to do something, explain it as a good idea for them." Took me a bit to learn this in engineering, you would think "but it will be correct!" would be an unassailable argument but as the author notes, the person who has to sign off on it may not care if it is correct, even if you passionately do care. This works for everything at the job, not just fixing software correctly.
I've found the only true way out of this hellish situation is to work at a place mature enough for the leadership to already understand all this.
If you have to explain why quality matters, they're at least as ignorant as you if not more. They deserve their fate. The upshot is you'll probably also get paid way better and more quickly develop a better sense of how business is supposed to be done at a mature company.
Of course you also have to deliver on your promises that all the time you're spending will improve things and isn't just some amateur quixotic itch.
As the sibling comment from `@wellpast` said, but to extend it with my point of view:
We can roughly say we can have three out of four: (high) quality, (low) time, (low) communication complexity, and (low) money. (Time is the dependent variable here.)
People are trying to apply factory processes and structures to a team sport, an engineering discipline. You do not teach or build a basketball team by breaking down each attack phase into steps and checkmarks.
You try to minimize communication and make the team work as one. It is a team- and individual-building exercise, not a process-building exercise. You make a plan, and follow Moltke the Elder's conclusion:
"no plan of operations extends with any certainty beyond the first contact with the main hostile force."
(Or paraphrased as you have heard: No plan survives contact with the enemy.)
All (types of) Engineers know this. But software engineering is "special."
And it is not a "move fast and break things issue."
That is part of all engineering or team playing too.
It is the type of business mentality that says: because a plan did not go exactly as expected, we need to add more process. Whatever that process may be. Because "if I as a manager add a process, then when the next plan fails I am covered, and I will blame the individuals."
Process has a place to ensure things happen in a legal and moral framework. And minimize adverse circumstances -- e.g. we bet all the hedge fund money accidentally when running tests.
Process is used differently in most startups and corporations, and not with the team in mind.
The construction metaphor is a bad analogy. The compiler does the construction, dev teams do iterative design, ideally with frequent feedback and adjustment.
Do you ever yell at a traditional architect and ask them when it's going to be done? It's always when the client is happy or makes their mind up about it. A lot of dev is like this.
Is there any human activity where quality is an attribute successfully taught? In my experience, being able to produce something of quality is gained only through practice, practice, practice.
> Is there any human activity where quality is an attribute successfully taught?
Every industrial practice.
On the other hand, the title just means that programming is not an industrial practice. Which should be obvious to anybody who looked, but some people insist on not seeing it.
For pilots, there are many filters to ensure people that failed to learn can't take many responsibilities. They ensure the pilots study and train, but there isn't any theory making sure the pilots learn and get the best safety practices. (In fact, if you are comparing with CS teaching, pilot teaching will give you a heart attack.)
For engineers, the situation is very similar to software. There are many tools for enforcing quality, but there's no structure for teaching the engineers, and no, there isn't a widely accepted theory for how to teach design quality either.
The one place where people are consistently taught how to build quality is in manufacturing.
Aviation in particular has a very strong culture around (government-mandated) checklists and post-crash investigations. This has both pros and cons. The pro is that every airline learns from the mistakes made by every other airline, and over time the system becomes really quite safe indeed. The con is that it is quite expensive and time consuming.
Imagine if every software company was obliged by law to:
- Every single release has to have been signed off by someone who got their "software release engineer" certification at the software equivalent of the FAA.
- This engineer is required by law to not sign off unless every box on a 534 item checklist has been manually verified.
- Any time an unplanned downtime happens at any company, a government team comes in to investigate the root cause and add points nr 535 through 567 to the checklist to make sure it never happens again.
If such a system was mandated for software companies, most of the common bugs would very rapidly become a thing of the past. Development velocity would also fall through the floor though, and most startups would probably die overnight. Only companies that could support the overhead of such a heavyweight process would be viable, and the barrier to entry would massively increase.
I wish someone would create that 500-item checklist. I've seen attempts, but they tend to be either not actionable ("is the software high quality?" - meaningless), or metrics that are just gamed ("is test code coverage > 80%?").
If I understand his theory correctly, in your case there would be a competing metric to the "test coverage" one that said for any changeset, a test cannot itself change by more than 20% in the same changeset as non-test code. So you can change the code such that it still passes the existing tests, or you can change the test to adapt to new requirements, but you cannot rewrite the tests to match your newly changed code
I'm acutely aware this is a terrible example, the devil's in the details, and (in my experience) each company's metrics are designed to drive down their own organizational risk <https://en.wikipedia.org/wiki/Conway%27s_law>, combined with "you're always fighting the last war" :-D
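A crude sketch of one way such a rule could be checked against a changeset, assuming git, a naive path convention for identifying test files, and one particular reading of the hypothetical 20% metric described above:

```python
import subprocess

def _git(repo: str, *args: str) -> str:
    return subprocess.run(["git", "-C", repo, *args],
                          capture_output=True, text=True, check=True).stdout

def violates_rule(base: str, head: str, repo: str = ".", limit: float = 0.20) -> bool:
    """Crude reading of the rule: in a changeset that also touches non-test
    code, no test file may have more than `limit` of its lines changed."""
    prod_touched = False
    offenders = []
    for line in _git(repo, "diff", "--numstat", f"{base}..{head}").splitlines():
        parts = line.split("\t")
        if len(parts) != 3 or parts[0] == "-":     # skip malformed/binary entries
            continue
        added, deleted, path = parts
        changed = int(added) + int(deleted)
        if "test" not in path.lower():             # naive test-file convention
            prod_touched = prod_touched or changed > 0
            continue
        try:
            size = len(_git(repo, "show", f"{head}:{path}").splitlines()) or 1
        except subprocess.CalledProcessError:      # file deleted or renamed
            size = changed or 1
        if changed / size > limit:
            offenders.append(path)
    return prod_touched and bool(offenders)

# e.g. violates_rule("origin/main", "HEAD") in CI could block the merge.
```

The "test in path" heuristic and the per-file 20% threshold are assumptions for illustration; a real repo would need a proper classification of test versus production files, and the metric would of course be gamed like any other.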
> > Aviation in particular has a very strong culture around (government mandated) checklists and post-crash investigations
That's the reason why aviation can only shine when it becomes a private means of transportation, and I don't mean 70mm private jets but 150k light helicopters.
When a critical mass is hit, accidents will become no more traumatic to the collective psyche than car accidents. The lighter the aircraft the better, because it would seem exactly like a car crash as opposed to leaving a huge burning hole in the ground.
Citation needed. The key to the industrial revolution was trivializing the human work so as to take as many human errors out as possible and to systematize everything. I wouldn't call that type of process "teaching quality".
Good teaching largely consists of setting the learner up in situations where they can practice effectively. To pick just one example many people are taught to improve the quality of their writing. This largely consists of giving guidance on what writing to attempt and (more importantly) guidance how to reflect on the quality of the writing you've just done so you can improve.
The arts. The further you go in instruction, the more it becomes about the little differences and about quality. Practice always helps, but quality is definitely taught and learned by many as well.
I cannot fly professionally anymore due to health, but this is something we are taught in aviation and something I too find lacking from tech so far.
Like, you’re taught the standards as part of learning to fly, but as time goes on, you’re told to narrow your tolerance of what is acceptable. So if you are learning how to do steep turns, for instance, the altitude standard is +-100’. You’re taught, “that’s the minimum, you should be trying for 50’”, then 20’, and then the absolute best performance would be where you do your turn, the needle doesn’t move, and as you roll out on a calm day you fly through your own wake. The goal is “better, always better, what can I do better?” And flying is not graded on overall performance: if you aren’t satisfactory everywhere, you fail. Culturally, satisfactory isn’t satisfactory; it’s the starting point.
That encourages a much more collaborative model I feel like. I’ve only worked one or two flying jobs that were not collaborative. In the outside world it sometimes feels the opposite. In flying, you truly want everyone to succeed and do well, and the company does. Even the guys I hated that I flew with, I didn’t want them to fail. If they failed, I was partially responsible for that.
It wasn’t always perfect, and I worked with some legendary assholes while I was flying, but truly, they supported me and I supported them, and if I screwed up (or they screwed up) the culture required that we found a way to minimize the future potential screwups.
You’re actually trained on what quality means in a wide variety of contexts too, and flight operations quality assurance (FOQA) is a big part of many airlines. In smaller bush operations where I worked, it is significantly more informal, but we truly had a “no fault” culture in nearly all the places I worked. It’s not perfect, but that’s the point, “ok how can we make this better?”
If someone had an idea for how to do something better, there may have been friction, but that was rare if it actually was better, and as soon as you could show how adoption was fast even at the less standardized operations I worked at.
Not saying it’s all unicorns and rainbows, but I feel like quality, and decision making, and “doing the right thing” were an integral part of the culture of aviation. “The weather is too bad and I cannot do this safely” grounds the flight, you don’t blast off into the shit (at reputable operators) to get the job done anymore (it’s not 1995), and it feels like this new industry is the opposite.
The entire concept of a “minimum viable product” is somewhat illustrative of the problem. It shouldn’t be the “minimum viable” it should be the “minimum quality product we’re willing to accept as a starting point.” But that doesn’t roll off the tongue in the same way.
We shouldn’t be striving for anything that’s the “minimum.” The minimum is only the beginning.
> But the goal is “better, always better, what can I do better?”
Is that not the case in software? The incentive to improve may not be quite as strong as in aviation (crashing software isn't quite the same as crashing airplanes), but it is still pretty strong. Life is very miserable in software when quality isn't present.
What happens when you work under a group of people who are satisfied at stage one of project X? You know you can iterate to get two stages further, but they want you to work on projects Y and Z. This is a very common situation where you, or even the whole development team has very little control.
Of course, management should be supportive of quality improvements, but their reality ranges from being under genuine pressure to deliver projects X and Y at some given stage of quality, through to simply not understanding or caring about quality at all.
My own experience is that individual programmers have vastly different ideas of what quality is, based on their experience and education. You can be struggling to get a team to improve, and then you hire a fairly ordinary individual with a very different background who makes a sizeable impact on quality and on the team's culture around it. I'm thinking specifically of someone who joined from aerospace, but I've seen it with finance backgrounds too. I think the background matters less than the perspective and the ability to hold people accountable (including yourself).
> What happens when you work under a group of people who are satisfied at stage one of project X?
No doubt the same as when the members of your garage band are happy to stay in the garage while you have your sights set on the main stage. You either suck it up and live in the misery, or you get better on your own time and leverage those improvements in the quality of your performance to move into a better position where quality is valued.
Practice only matters if you try to produce quality. If you just practice producing crap you'll only get good at producing crap. But I suppose if someone doesn't care about quality (and these people do exist) all bets are off really.
There is a corollary. You need to learn how to build quality software. You also need to know what level of quality vs completeness tradeoff you need to make.
Imagine I have a deadline of Jan 15th to demo features x, y, and z to senior leadership (who then will make funding decisions that could impact my project), and I get to Jan 15th with x, y, and not z - or worse, none of them working fully. But the code quality is high, there are no TODOs hanging around, no extra development focused logging, no duplication of code, no leaky abstractions.
That is a 100% fail in leadership's eyes, unless you have a really good story on why z didn't get shipped (and code quality is NOT that).
All those things I listed will have to be addressed at some point, and the longer they go without being addressed the harder they will be to address. But if you need to do that to meet deadlines, then you do it.
Of course, if you are in a place where leadership allows you to work from a backlog and demonstrations and features to demo are scheduled not arbitrarily based on leadership's schedule/interest, but on the features that are newly shipped since the last demo, then you are in luck.
At the end of the day the important thing to remember is that you are not being paid to build software. You are being paid to provide a solution to your customer's problem. Other than CTOs and some forward thinking leaders, they don't care about the software. They care about whether the problem is solved, did it cost me more or less than expected in labor and materiel, and is it compliant with necessary laws/regulations.
Exactly, there's no point in a startup having _perfect_ code, great test coverage, and an excellent CI/CD pipeline if they've not gotten to market before the cash runs out.
Though I think that if leadership are not consulting the Engineers on timelines, instead just dictating the timelines to them, then there is a massive problem afoot.
or maybe those who know how to do this are just unable to spread the knowledge... something about how they think their private secret codes are the source of their wealth
when in fact, it's merely the scheme by which they maintain an advantageous capacity to extract energy from those seeking to learn how to build quality software
There's an obvious comprehensibility complexity to code, apparent to anyone who has spent almost any time whatsoever trying to make something happen in software. However, we've got zero academics or theory around it.
Just 'best practices' (ie a thing other people are known to do so if things go wrong we can deflect blame).
And code smells (ie the code makes your tummy feel bad. yay objective measures).
And dogma (ie "only ONE return point per function" or "TDD or criminal neglect charges").
Sure, please do something for QA because it'll be better than nothing. But we're probably a few decades of waiting for actual theoretical underpinnings that will actually allow us to make objective tradeoffs and measurements.
There is plenty of academic work on it, as real engineers (those that studied Software Engineering or Informatics Engineering, instead of getting fake engineering titles from bootcamps) should be aware.
Usually available as optional lectures during the degree, or later as Msc and PhD subjects.
Although, so far I've only bumped into cyclomatic complexity (with some studies showing that it has worse predicting power than lines of code) and lines of code.
I don't know. I was hoping for something like: "We know inheritance is bad because when we convert the typical example over to this special graph it forms a non-compact metric space" Or something like that.
Even though I find cyclomatic complexity uncompelling, it at the very least can slurp up code and return a value. Nicely objective, just not particularly useful or insightful to whether or not things are easy to understand.
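For reference, a rough sketch of how such a count can be computed; this is a simplified "one plus the number of branch points" estimate using Python's ast module, not any particular published metric:

```python
import ast

# Simplified set of AST nodes treated as branch points.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp, ast.IfExp)

def cyclomatic_estimate(source: str) -> int:
    """Very rough cyclomatic-style count: 1 + number of branch points."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

snippet = """
def classify(x):
    if x < 0:
        return "negative"
    for _ in range(3):
        if x % 2 == 0:
            return "even-ish"
    return "other"
"""
print(cyclomatic_estimate(snippet))  # 1 + 3 branch points = 4
```

Which is exactly the problem: it slurps up code and returns a number, but the number says nothing about whether the branches are easy or hard to reason about.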
The provided link looks suspiciously like they're going to talk about the difference between system, integration, and unit tests. The importance of bug trackers. And linters / theorem provers maybe.
I don't think these are bad things, but it's kind of a statistical approach to software quality. The software is bad because the bug chart looks bad. Okay, maybe, but maybe you just have really inexperienced people working on the project. Technically, the business doesn't need to know the difference, but I would like to.
I don't suppose you know where I can get their list of references without hitting a paywall? Specifically [16] and [24].
EDIT: [For anyone following along]
The linked paper is Measuring Complexity of Object Oriented Programs. Although, the paper isn't free. They reference several other papers which they assert talk about OO complexity metrics as well as procedural cognitive complexity, but unfortunately the references aren't included in the preview.
Apparently, there's also a list of Weyuker's 9 Properties which look easier to find information on. But these look like meta properties about what properties a complexity measurement system would need to have [interesting, but they don't really seem to comment on whether or not such measurement is even possible].
It looks like a lot of this research is coming out of Turkey, and has been maybe floating around since the early 2000s.
EDIT EDIT: References are included at the bottom of the preview.
EDIT EDIT EDIT: Kind of interesting, but I'm not sure this is going to yield anything different than cyclomatic complexity. Like, is this still an area of active research or did it all go by the wayside back in the early 2000s when it showed up? The fact that all the papers are showing up from Turkey makes me concerned it was a momentary fad and the reason it didn't spread to other countries was because it doesn't accomplish anything. Although, I suppose it could be a best kept secret of Turkey.
Renamed programs are defined to have identical complexity, which is pretty intuitively untrue, so I've got my concerns.
EDIT ^ 4: Doesn't seem to be able to take data complexity into account. So if you're dividing by input, some inputs are going to cause division by zero, etc. You might be able to jury rig it to handle the complexity of exceptions, but it looks like it can mostly handle static code. I'm not sure if it's really going to handle dynamically calling code that throws very well. I also don't think it handles complexity from mutable shared references.
Nice try, but unless there's a bunch of compelling research that no actually this is useful, I'm not sure this is going to cut it. And at the moment the only research I'm finding is more or less just defining functions that qualify as a cognitive measure under the Weyuker principles. I'm not seeing anyone even pointing it at existing code to see if it matches intuition or experience. Happy to be found wrong here, though.
The scientific groundwork for excellent testing, anyway, has already been done-- but not in the realm of computer science. This is because computer scientists are singularly ill equipped to study what computer scientists do. In other words, if you want to understand testing, you have to watch testers at work, and that is social science research. CS does not take social science seriously.
An example of such research done well can be found in Exploring Science, by Klahr. The author and his colleagues look very closely at how people interact with a system and experiment with it, leading to wonderful insights about testing processes. I've incorporated those lessons into my classes on software testing, for instance.
You may still not buy into it, but note that single exit was established for languages like C where an early exit can make it difficult to ensure that all resources are freed. It isn't meant for every language – and, indeed, languages that are not bound by such constraints usually promote multiple exit because of the reasons you bring up.
And even that is wrong: single entrance/exit was originally about subroutines designed to be goto'd into at different points for different behavior, and which would goto different points outside the subroutine as the exit.
There are pretty much no languages left today where it's even possible to violate this principle without really trying. It's not about having a single return; it's about all functions starting at the top and return statements always taking you back to the same place in the code.
I wish more ppl felt this way. What a compliment it is to oneself when I hear ppl saying "write clean code" as if they know its address and had dinner with clean code just last night.
I was thinking there should be some metric around d(code)/dt. That is, as the software is used, 'bad' code will tend to change a lot but add no functionality. 'Good' code will change little even when it's used more.
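A rough sketch of how such a churn measure could be pulled from git history; the repository path and time window here are arbitrary, and churn alone is only half of the proposed metric (it says nothing about functionality added):

```python
import subprocess
from collections import Counter

def churn_per_file(repo: str = ".", since: str = "1 year ago") -> Counter:
    """Added + deleted lines per file over a time window: a crude d(code)/dt."""
    out = subprocess.run(
        ["git", "-C", repo, "log", f"--since={since}", "--numstat", "--format="],
        capture_output=True, text=True, check=True,
    ).stdout
    churn = Counter()
    for line in out.splitlines():
        parts = line.split("\t")
        if len(parts) != 3 or parts[0] == "-":   # skip blanks and binary files
            continue
        added, deleted, path = parts
        churn[path] += int(added) + int(deleted)
    return churn

# churn_per_file(".").most_common(10) lists the most frequently rewritten files;
# comparing that against functionality actually shipped is the hard part.
```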
d(code)/dt isn't a very good metric though. Think of the Linux kernel. Drivers get some of the least maintenance work and are broadly the lowest-quality part of the kernel. arch/ is busier than drivers/, but anything you find in the parts being touched is also significantly higher quality.
> you cannot be taught what nobody knows how to do
It's worse than that. No one can agree what "quality" means.
Mostly, the word is used as a weapon.
The pointy end of the weapon is what management pokes you with whenever anything unexpected happens. Typically they do this instead of making sure that problems do not happen (a.k.a. "management").
The weapon's handle is sometimes flourished by process gatekeepers who insist on slowing everything down and asserting their self-worth. This is not good for throughput, anyone else's mood, or eventually even for the gatekeepers.
People usually refuse to talk about quality in terms of actual business metrics because if anything unexpected happens that's not covered by the metrics, there will be fault-finding. And the fingers pointed for wrong metrics are typically pointed at middle management.
Like with all things quality is proven through practice and measures, which means quality can only be guessed at before a product is built.
Like in all things here is how you do it:
1. Build a first version that works and accomplishes all initial business requirements.
2. Build test automation or inspection criteria.
3. Reflect upon what you built and watch how it’s actually used in production.
4. Measure it. Measure performance. Count user steps. Count development steps. Determine real cost of ownership.
5. Take stuff out without breaking business requirements.
6. Repeat steps 2-6.
That is how you build quality whether it’s software, manufacturing, labor, management, whatever.
In my own software I devised a new model to solve for this that I call Single Source of Truth. It’s like DRY but hyper aggressive and based upon empathy instead of micro-artifacts.
> Neglecting QA is a shame because 90%+ of all students work in a company context after they finish their degrees. It will be necessary to deliver software without bugs in time.
once again bringing up the hot debate of "are colleges job prep or research institutions"? Many students these days will proceed to grab a job, but is that something a university should strive to be?
I wish at the bare minimum there were more proper apprenticeships for companies that want some specific type of software engineer instead of 3 month vibe checks as it is right now. Or bootcamps made more or less to study to the test instead of what actually brings value. But I guess no one is really chomping at the bit to change the status quo
> is that something a university should strive to be?
Universities taking public money (including students government grants/loans) should strive to make society better. Part of that is getting kids into good jobs that society needs done.
There are a few "retired" people taking classes that they are paying for on their own just for fun. If those people think they are getting value despite taking subjects society doesn't value I'm fine with that. The majority of students though are young people that society is trying to invest in to make a better future, and so the more universities prepare those kids to make a better future the better.
People making roads and building powerplants also take government money. Do you expect them to prepare people for software dev jobs too?
Just because companies decided to start requiring degrees because they're too stingy to pay to train their own staff, IMO they shouldn't get to divert universities from their original mission, which is education and research. After all, those are also important to society, and if universities don't do them, who will?
I wonder how much of the lack of QA is rational. That is to say, for most projects does shipping with lots of somewhat hard-to-find bugs actually hurt the bottom line?
For some classes of bugs it can (e.g. if the software is so bad as to open you to a class-action lawsuit; in B2B software, bugs that put you in breach of contract), but for many classes of consumer software, it's not clear to me that shipping software that works better is rewarded. Picking not-too-buggy software ahead of time is hard, people are slow to switch (even when the people encountering the bugs are the people selecting the software, which is often not the case), and people are good at subconsciously avoiding triggering bugs.
It starts mattering more for consumer software when you reach mass scale. Somewhat hard-to-find bugs at the scale of hundreds of millions of users (like a social media company), turn into bugs faced by hundreds of thousands of users.
But at that scale (in my experience), QA is up front and center and is typically a core pillar of engineering orgs.
> It starts mattering more for consumer software when you reach mass scale. Somewhat hard-to-find bugs at the scale of hundreds of millions of users (like a social media company), turn into bugs faced by hundreds of thousands of users.
From a cynical point of view, if those hundreds of thousands of users will use your product despite the bugs, does it matter?
People are only as loyal as their opportunities; if the competition is mostly the same as your product but has either fewer bugs or bugs in a less painful flow, buh-bye
I'm super cognizant that the cited example of "social media company" is its own special ball of wax, due to the network effect, and I wish to holy hell I knew how to fix that bug :-)
If you study coding at your university, you will be a stupid code monkey. However, at proper universities you'll study much less coding and more Software Engineering or Computer Science, which teach engineering and science. Not just QA, but also mathematical foundations, writing proofs, writing science papers, writing compilers, doing AI, understanding HW, and doing proper project management, with plans and tests.
Pity that he didn't go to a proper uni. Writing quality software is a solved problem, just not applied in most companies.
The problem is that while experienced software engineers probably do know how, and would even like to, management never approves enough time for doing so, and so here we are.
Unrelated to the article, I could immediately identify the image used as definitely AI-generated. But I can't identify any reason why. It's a normal picture of a stone brick wall. Yet I'm 100% sure it's AI.
No shame to the author for their choice; replacing stock images with generated ones is a great use case. It's spooky to me that we've been so quickly trained to identify this subconsciously.
True. I think in this case it's because of the texture of the bricks. It looks like they were wrapped in cloth or something. This seems a common texture in many AI-generated images.
I completely understand university teaching algorithms over real dev stuff.
Algorithms are hard, and take serious work; they're basically math. University seems like the ideal place to learn A* search or quicksort, I can't imagine figuring that out without a lot of effort and time spent just focusing on that.
What I don't understand is why programmers themselves focus on algorithms over stuff like "vague heuristics to guess whether this JS framework will have you tearing your hair out in a month".
That kind of thing isn't really hard and doesn't require any heavy discipline and focus; it's just something you pick up when you've tried a few different systems and notice stuff like "Oh look, that linter caught this bug for me" and "Hey look, there's not much autoformat support for Mako, maybe I'll just use Jinja2 like everyone else".
Instead they tell us to do stuff like code katas and endlessly polish our coding and algorithm skills rather than practice working on big stuff and managing complexity.
Maybe it's because algorithms are the hard part, and being really good at that is needed to work on the real cutting edge stuff.
I suspect as with most things in education, they focus on small self contained problems because it's easier to teach and easier to grade. These toy problems end up being all students know and therefore are what employers select on.
I write Quality software. It's a matter of personal satisfaction. Many folks can't really tell the difference between relatively good software, and very high Quality software, but the cost difference can be quite large. We don't get rich, writing top-shelf software.
The issue is, in my opinion, that we don't even produce much "relatively good" software, these days.
I won't get into it. It doesn't win me any friends. I was fortunate, to find a company that shared my values. Many of their processes would absolutely horrify a majority of folks on this site, and it was, quite frankly, often maddening.
But you can't argue with results. They produced extremely expensive kit, for over 100 years, and they are regularly used on the ISS.
Thank you for sharing. I just read through several of your blog posts and especially resonate with your “evolutionary design”. The idea of integration tests/test harness first over unit tests makes a lot of sense to me too. As a one person team myself, the “art” of creating quality software products, at speed, is revealing itself and is quite fascinating.
It’s not everyday that devs like me get to learn from someone with as much experience as you have. Thanks again for sharing your knowledge!
But I am quite aware that I have a ton more to learn, and places like this, are a good place to do that.
I should get around to doing some more writing, though. I've been very involved in a project, for the last couple of years, and haven't taken the time to write.
Now that the project is approaching release, I may be able to free up some time.
I work in QA automation and also develop my own projects. There is more to quality assurance than QA. When you spend days creating something and then throw it away to make it better, that IS quality assurance! That deleted code, that deleted architecture, that deleted design is the cost of quality. A totally unappreciated aspect of QA.
I've worked on so many poor-quality projects. I've had so many heated conversations trying to teach people that there are no such things as rules in SE and it's important to _know when to bend/break the rules_. Rigorously following rules doesn't mean high quality, automatically. Some things simply don't exist in the little world you're building.
Translations, for example, usually live outside of any layered cake you are making. If you write a string that will be shown to a user deep down in your code, it needs to be marked for translation and translated. If you try to layer it ... gods help anyone who comes along to change that switch statement.
There are lots of other cross-cutting concerns that people try to put into a box because of "rules" that don't belong in a box. That's usually where your quality starts to plummet.
I think developing high quality software is more art than engineering.
The most useful pieces of code typically have to deal with some of the most ridiculous constraints. Determining a way to integrate these such that the UX doesn't suffer is the real talent.
The only pedagogical model that makes sense for me anymore is the apprenticeship. You can definitely learn it on your own, but it's a hell of a lot faster to observe someone with 20k+ hours XP and skip some of the initial lava pits. Perhaps some of those lessons are important, but learning and knowledge are not linear things.
It is the same as when you learn about the backup and restore process, and then learn the hard way that you should have been testing your backups to see whether they can actually be used to restore data.
While not a panacea, visual/snapshot testing tools like Cypress + Percy that perform clicks and take screenshots can be tremendously helpful to ensure that building in a programmatic QA plan is a predictable part of the scope of what is coded.
And the good thing about snapshots is that they provide a visual way to communicate a preview to stakeholders of what may change, both for the currently-being-developed workflow as well as to prevent visual regressions elsewhere - so they're inherently more easily justifiable than "we're spending time on testing that you'll never see."
The article is correct that treating QA as the last phase of the project, and a disposable phase at that, is a recipe for disaster. But if you make it an ongoing part of a project as individual stories are burned down, and rebrand it as part of a modern development workflow, then it's an entirely different conversation.
I have been, at various points in time, taught how to build quality software. I think the large majority of people I work with have been taught about this as well. So I'm not sure who the "you are never taught ..." is referring to here. Should it perhaps instead be "I was never taught ..."?
Were you really taught that, or were you taught cargo-cult things that don't make for quality software? I've had some of each in my past.
This is a big area that I wish researchers would focus on. What really does make for long term high quality software. What are the actual trade offs. How do we mitigate them. What are the limits to TDD - is it good? Detroit or London mocks - when to use each? When is formal verification worth the time and effort? There is a lot places where we as an industry have ideas but are in heated debates without knowing how to find a truth.
No, I have been, at various times, taught about a number of different techniques that are used in the endeavor to build quality software, along with lots of discussion about the tradeoffs between them and when they may be more or less appropriate.
It certainly is not something with a single easy answer, and I certainly agree that it remains a fruitful thing to continue researching, but that doesn't mean that there is nothing to be taught about it. There is lots to be taught about this, and lots that is taught about it, to lots of people.
My colleague, a senior full-stack developer with a CS master's degree, has been asking: "why should I write tests if I can write new features?" And that is what managers often think, because you can present new features to the business but can't do the same with tests. They have no immediate value.
I was actually taught how to build quality software (which is not limited to "having no bugs") in college, but I do not have the time or resources to apply this knowledge consistently in a corporate setting because of the pressure to deliver.
Because frankly, too much quality is not necessary, in many many cases. To know when you should or should not emphasize quality over quantity and speed, to meet a certain financial objective, is actually harder than writing quality software in the first place, I think.
I agree in principle, but in my experience quality is not nearly prioritized highly enough. There is not enough understanding of quality attributes beyond the most visible ones like compute performance and stability (i.e. lack of bugs). And even for those I work on projects where people complain about lack of proper test coverage constantly, but it is impossible to dedicate time to improve that.
I'm pretty sure even those basic two, performance and stability, are extremely undervalued, when you look objectively at how fast modern hardware is and yet how easy it is to find slowness simply in day-to-day use of devices.
This is false.
It's just that the costs of low-quality code are much less obvious and harder to measure than the dev time.
But the amount of bad code just piles on itself over and over, and we end up in a world where hardware becomes incrementally faster while software becomes slower and slower, and buggier.
I mean, in the strict sense of the word an individual company will not pay those costs, but on a societal scale, how much time (and thus money) is wasted daily by all the people waiting 30 seconds for Windows Explorer to load? If your app has millions of users, literally every additional second your app wastes multiplies into enormous numbers.
It's akin to pollution, really:
Individual company making 'dirty' things won't see the consequences.
But scale this mindset out and suddenly we wake up in a world where trillions of dollars are spent to counteract those effects.
I wonder where you get the confidence to make such a strong statement, which is clearly not warranted. I want to challenge you to broaden your view a bit: Not a lot of software is like Windows explorer. Not a lot of software is performance critical. A lot of software can do with bugs, with many many bugs, to be honest. A lot of code is run fewer than 100 times, before it's retired. Also, not a lot of software written has many users. Or enough users to support maintaining it. "Pollution" often affects just the author of the software themself. Software is just a means to an end, and the end decides, how much effort was warranted in the first place.
> Obviously we aren't talking about some simple automation scripts here.
This is moving the goalpost, and it also ignores the fact that software exists on a spectrum from "simple automation script" to "messaging app used by millions". It seems you have a very narrow view of what software is, what it is used for, and the constraints that apply when building it.
This is not moving a goalpost. Running a program fewer than 100 times total, across all its users, is just very little for anything that could be considered commercial.
That really isn't a controversial statement.
So I am simply excluding this category as an extremum.
What software are you running that gets fewer than 100 uses before it gets retired?
>Great approach
Unironically better than trying to make prescriptions as broad and general as possible, because those usually are too generic to carry any actual value.
Yearly reports. They can be buggy, can be off by millions, due to rounding errors. They can crash. They can run for days. Nobody cares enough to rewrite them, because regulation will change before that effort amortizes.
Also note that I wrote "code" originally, because there can be programs which are run very often, but certain code paths are not, so my statement applies even for some parts of popular software.
The image I think would be valuable for you to consider is a curve where 20% of the code has 80% of all executions, and 80% of the code gets the rest. It makes sense to put a lot of effort into writing the top 20%, but on any given day it is very likely you'll be working on the lower 80%.
When I first started in the 1990s I was told testing was 60% of the product budget and writing the code itself 20%. The remaining 20% was architecture and other design work. We didn't have unit test frameworks, but we did spend a lot of time writing throwaway test fixtures (which great developers were already generalizing into test frameworks, kicking off the unit test revolution).
What the hell? Yes, it is taught. How it is interpreted afterwards is very dependent on the team or community you surround yourself with. You get all the tools from schooling; how we use them is entirely up to us.
Almost no university teaches software testing in a competent way. For some years, Cem Kaner and I (he, a professor at Florida Tech; me, a proud high school dropout) ran a small annual conference called the Workshop on Teaching Software Testing. We specifically invited Computer Science professors to come and grapple with the problem. Thus, I personally witnessed and engaged with the sorry state of practice with teaching people about testing.
Cem developed a great testing class for Florida Tech, later commercialized under the name BBST, that drew in contributions from prominent industry voices. Otherwise, I don't know of any other University that has put the effort in. An exception may be what Tanja Vos is doing at Open University.
The problem I have with CMU is that they are the source of the Capability Maturity Model, which is simply incompetent work, justified by no science whatsoever, which tries to apply Taylorist management theory to software engineering. It never caught on despite all the marketing they've done-- it's only popular in military circles. I shudder to think what they are telling students about testing.
You can disagree with me, of course-- but in doing so you have to admit there is at least a schism in the industry among various schools of thought about what testing is and should be. My part of the field considers testing to belong to social science, not computer science.
> In addition, at least in my studies, there was a semester about project management approaches and scrum. All of which is great, but QA is missing completely.
Just an anecdote, but we had a "Software Development" class like this in CS (I took it in the '90s) and even though it followed a waterfall development model[0] and we used Gantt charts, QA (testing) was a big part of it and 1 of our 4 team members (or maybe 2 of the 4 worked on it together) was primarily responsible for it. (I wrote the parser and made the diagrams/documentation for the parser.)
The description (in an old catalog[1]) is:
Software specification, design, testing, maintenance, documentation; informal proof methods; team implementation of a large project.
Turns out I didn't need to look up the old catalog because the description is exactly the same still! Except it's CS 371 now, and the longer "Learning Outcomes" for the course has some newer stuff (agile and version control) but otherwise is all the same things I learned at the time.
In Austria these topics are usually researched at universities of applied sciences (Fachhochschule) rather than universities (Universität). Some technical universities here attempt to combine these disciplines, to moderate success.
For example, the largest technical UAS in Austria offers Computer Science as an undergrad degree but only does research on Software Engineering at a graduate level.
Excuse me, I was taught how to build quality software, but I can't do it because the MBAs who run the company haven't been taught how to build quality software, and the board of directors who hired upper management were not taught how to write quality software. Most places I have worked the opinions of software developers were thoroughly ignored at best.
From one point of view I think it's fair for universities to emphasise CS over SE, because there's "truth" inherent in CS, while SE, which deals with building software in real life, is more complicated because real life is complicated.
Even for the QA advice in the post "writing tests as you write the software" one can argue it's infeasible/inappropriate for my type of project and/or with the people who work on this project.
So my two cents on this is: let universities teach students *to be aware of* all the SE principles and best practices this industry now has, like the tools in your toolbox, but also make clear that whether to use them in real-life projects requires assessing the situation, the cost/value balance, the people involved, etc. After all, not everybody works on dream projects.
Berkeley has a class that teaches TDD and other XP practices. (Or at least used to.) Pivotal used to recruit a lot of new grads from Berkeley for that reason.
IME, “QA” doesn’t really correlate with quality software, nor is there really a time vs. quality trade off. Bad software is often also delivered poorly, and high quality software can be delivered quickly.
I went to a University with required Co-Op experience in industry. There was a great feedback cycle of students coming back to classes and writing tests and design documents for assignments that didn't even require them. Reading articles like this make me really grateful for that.
> At some point, I realized that I wasn't using the right arguments either. Explaining that the software will be 'more stable' or 'make maintenance much easier' is not palpable for someone who doesn't work in the codebase themselves. We need to speak about money. As developers, we need to speak about the cost of not doing QA. This is the language of business and managers in general.
How cartoonishly incompetent have we allowed managers to get that they can't connect stability to money on their own? If the stability concerned a bridge, would engineers also be expected to translate a potential collapse into monetary terms to get their manager to approve a higher grade of steel cable?
Making good (!perfect) software is a function of three constraints: knowledge, economic resources, and time.
You can mix those three together and produce a desired output, but don't expect perfection; perfect software only appears when the three variables tend to infinity.
> To be realistic, it's important to not over-engineer QA measures with a big upfront investment. We shouldn't block progress of the project as a whole and neither will we get the necessary buy-in from all stakeholders with this approach.
I suspect that a lot of the bad code that is out there exists because teams are constantly in crunch time where it is important to get certain features out by a deadline.
From that perspective, this statement is kind of a contradiction. If every minute is vital to finishing a feature on time, then the act of writing tests will always block progress of the project
Basically, I just wish it were more innately profitable to write quality software
> If every minute is vital to finishing a feature on time, then the act of writing tests will always block progress of the project
So you will deliver that feature, it will fail for your customers, and you will be in panic-mode trying to fix it on live systems. Seriously, that's going to happen.
Finishing a feature must always include at least some minimal time for testing. That testing needs to be done by someone not involved in the development. Developers sometimes misunderstand requirements (or think they "know better") and they will test what they implemented, not what the requirements actually say.
I wrote a similar article: https://uptointerpretation.com/posts/art-school-for-programm.... I'd love to see schools that put programming first and foremost, that taught it with the respect that it deserves. There's this idea that programming is not worthy of a course of study, that programmers need to learn computer science and not programming. I disagree. It's a discipline unto itself and learning computer science to learn programming is really ineffective.
The whole article has a straw-man feel to it. It is not the senior developer's responsibility to create a proper QA policy for the project. It is good that the lead should know how to implement good QA processes.
HOWEVER, the real culprits are the MANAGERS.
After 50+ years of skanky software development policy aimed at lowballing cost, it's time to blame the right people. No amount of cajoling or "business/budget"-speak manipulation is going to fix a fundamental flaw in how managers at that level are trained and behave.
We have to stop being apologists for mistakes not of our making.
If using QA techniques would increase the manager's bonus, then we would see them being used. If all it does is make better software, then this post will be rewritten in 50 years and still be relevant.
Yeah, the dimensions discussed in this article are somewhat advanced stuff that comes from experience. DRY is a relatively basic concept and easy to grasp, and unfortunately mid-level engineers do really dangerous stuff in the name of DRY: horrible abstractions get created that break down and leave a horrible mess when the requirements change.
In my experience it is generally wise to avoid abstractions and copy/paste things a couple of times; once the code base matures, good abstractions will be more obvious. Even then it's good to think about future changes: will these 2 things want to evolve separately in the future? If the answer is YES, then maybe coupling them is not a great idea. I think there was a really good Kent Beck talk about coupling vs cohesion somewhere.
Another thing to think about is breaking things, if changes are local to one single endpoint then any changes there can only break that endpoint, edge cases and scenarios to consider are only relevant to that endpoint. When changes to a core abstraction are required then hundreds of use cases/edge cases need to be considered - why are we creating so many core abstractions in our systems in the name of DRY?
I've also found that the more moving parts you add, the harder a system becomes to learn; the S in SOLID is probably to blame for that. The only thing the single responsibility principle is useful for is unit tests (easier to mock), but it makes code many times harder to understand. If the actual functionality is not local to the file, things become ungreppable via code search, and understanding the entire system requires an IDE and jumping around to each and every beautiful lpad() implementation, trying to piece together what is happening one 3-line function at a time.
Then there is also layering to consider: if 2 pieces of code look somewhat similar but belong to different layers (for example a controller and a DAO layer), then care must also be taken not to make an abstraction that couples them together, or to couple 2 unrelated modules that could otherwise have their own life cycles.
These are just some aspects I think about when creating abstractions, but somehow I see engineers focus too much on DRY. Maybe they got burned badly at some point in their career by forgetting to change something in 2 places?
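To make the layering point concrete, a small hypothetical sketch (all names are illustrative, not from the article) of two functions that look DRY-able but are better left duplicated:

```python
from dataclasses import dataclass

@dataclass
class User:           # hypothetical domain object
    id: int
    name: str

def serialize_user_response(user: User) -> dict:
    # Controller layer: the shape the API promises to clients.
    return {"id": user.id, "name": user.name}

def user_insert_params(user: User) -> dict:
    # DAO layer: the shape the INSERT statement happens to need today.
    return {"id": user.id, "name": user.name}

# A "DRY" refactor that merges both into one shared to_dict() couples the API
# contract to the database schema: renaming a column later silently changes the
# public response (and vice versa). The duplication is cheaper than the coupling.
```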
My university offered both a CS degree and a Software Engineering degree. While both open the door to tech jobs once in industry, the contents varied in the upper years. The software engineering one started with common engineering courses, then common CS classes, and finished with project lifecycle classes such as QA & requirements gathering. In contrast, the CS program started with a mix of core engineering and science classes, progressed to the common CS classes, then went further into advanced CS and algorithms classes.
The competitive CS schools, where many of the students need good sample code projects, instill an innate sense of building quality software. Usually because the students know it'll be showcased, but also because they haven't gotten lazy with shortcuts or been managed into spending all their time developing new things instead of fixing. I thought it was funny the article referenced the famous umbrella monster. Here is the longer clip: https://coub.com/view/284lib
What they also don't tell you is that quality software can turn into a nightmare if you let some manager add a bunch of random requirements to it, freely, after the software has already been built.
A software business is made when someone needs an automated computation and buys a program from someone else.
The buyer has a specific requirement, which is distilled into a specification. The specification is implemented. An implementation that doesn't match the specification is a bug.
Now in order to verify the implementation's correctness relative to the specification there must be a QA.
The idea that people forget to add QA to the software development process is wild, because it means people are forgetting how to conduct business.
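As a toy illustration of that specification-implementation-QA loop (the spec and numbers here are invented), one line of the spec becomes an executable check against the implementation rather than against itself:

    # Invented spec: "orders of 100 or more ship for free".
    def shipping_cost(order_total: float) -> float:
        return 0.0 if order_total >= 100 else 9.95

    # QA: verify the implementation against the specification.
    assert shipping_cost(100.00) == 0.0    # spec: free at exactly 100
    assert shipping_cost(99.99) == 9.95    # spec: below 100 pays the flat rate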
I've found people with a computational physics background have a much better approach to QA. In that field "sense checking" calculation output is a part of the core methodology.
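A rough sketch of what that sense checking can look like in code, using an invented projectile example: after the calculation, assert a property you already know must hold.

    import math

    # Speed of a projectile when it lands back at launch height (no drag).
    def final_speed(v0: float, angle_deg: float) -> float:
        g = 9.81
        vx = v0 * math.cos(math.radians(angle_deg))
        vy = v0 * math.sin(math.radians(angle_deg))
        t_flight = 2 * vy / g                      # time to come back down
        return math.hypot(vx, vy - g * t_flight)

    # Sense check: without drag, the landing speed must equal the launch speed.
    assert abs(final_speed(20.0, 30.0) - 20.0) < 1e-6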
This is a side issue, but it seems to me that this general problem is what is making strict typing so popular. If tests and documentation were in place and parameters were being checked, strict typing wouldn't have much benefit, because types are just one aspect of making sure the whole system works as intended. Instead, in many cases development is chaos with little if any documentation or testing, and strict typing is great because it is the only tie holding the bundle of sticks together.
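A small, contrived sketch of what that tie buys you: with annotations in place, a static checker such as mypy catches the mismatch below even when there are no tests or docs for these (invented) functions.

    def parse_port(raw: str) -> int:
        return int(raw)

    def connect(host: str, port: int) -> str:
        return f"{host}:{port}"

    connect("localhost", parse_port("8080"))  # fine
    connect("localhost", "8080")              # runs anyway, but a checker flags str vs. int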
So I'm gonna need to see some data here. How many universities has this guy surveyed to make a claim like that?
My university did teach me design patterns, architecture and all. My professors did give me projects where requirements would change midway and the software had to adapt without being completely rewritten. They did teach us to write unit tests, to use a build tool to generate docs, reports, etc.
Was I battle tested for my first job? No, but it really wasn't as bad as what's described here.
I think there is a spectrum; it depends on the software that needs to be delivered and the needs of the business. A clear example is single-player video games, where the cost of automated testing is so high and the software has so little value after the sale that manual testing is a better option.
I literally can't figure out how to buy the book from that site. I clicked ebook, then amazon, and the link is dead. You should probably just share a working link.
I've been a software developer for 15 years. I started at a start-up company with many others who were fairly junior. We knew we had to take QA seriously, but didn't know how to do it. It took many years to come up with standards, tooling, and so on, but it was totally worth it. Everything in this article rings true from my experience.
Testing and QA are great and all, but there is something to be gained from a CompSci education besides making more money for the company. It can be enjoyed just for its own sake.
Binary arithmetic doesn't make dollars for your bosses, but it's fun. That doesn't make it less significant.
I didn’t learn how to build quality _anything_ until I dove into woodworking. Order, organization, process, care, details. You can _work_ with mistakes, but you can’t undo them.
I highly recommend anyone wanting to learn how to do something with care, try to pick up a skill that requires you to do it right the first time.
I did a bachelor's in Software Engineering, which is a bit more practical than Computer Science, and while it had quite a bunch of interesting practical subjects (communication, agile, waterfall, etc.), there was absolutely nothing about QA.
Something which should be taught is the importance of coming up with the simplest abstractions possible and sticking to the business domain. IMO, if you can't explain an abstraction to a non-technical person, then you shouldn't invent it.
The process of building quality applications starts long before any code is written.
You need to understand the domain, you have to design something that solves an actual problem or delivers tangible value to someone, you need a holistic approach to user experience.
As someone who never studied computer science formally, I find this perspective fascinating - for me, programming is the tool to solve the problem. It often feels like folks study the tool. And the problem is important too.
Basic project management triangle stuff. We’ve been told by the biggest names in tech that first you move fast and money is practically bottomless. You get fast and expensive but quality suffers. The end.
QA/integrations/SQM, and how to avoid rejection, were skills taught hands-on AFTER university by the great masters - the ones who were later unceremoniously forced into other occupations (auto leasing, banner print shops, etc.) and into cracking open their 401Ks to cover house and family support payments when the dot-com bubble crashed the NASDAQ and the related capex for six miserable years.
For example, at 16, my QA manager (whose husband had a Nobel Prize) and my tech lead (who wrote malloc/free for Unix and was learning VMS from me as fast as I could learn C from him) were totally opposite forces, separated by HR and different directors. QA didn’t code anything but batch jobs/shell scripts and contracted back to Engineering for any apps they wanted (or simply required them to be in the release package). If you wanted to keep your job, you didn’t submit programs with performance/functional deficiencies and definitely not logic issues as compared to the SPEC that all parties signed at the outset of the project (put that in yoh Kan Ban, Man rofl)
Ok another arrogant post from older former child prodigy! Happy Holidays!
Huh. I was taught this in Software Engineering (2 semesters) in my CS degree, which was a required "capstone", so to speak, and focused on how to build real world software.
Some of it can't be taught though, any more than a craftsman or artist can teach a novice how to produce what they do with the same result.
The processes and steps can be taught, but only through experience can some things be internalized.
I dunno about this author's CS program, but at GT we had significant coursework related to all aspects of the SDLC (including unit and acceptance testing), business case value, etc.
I don’t think it can be taught, at least not in a classroom. Apprenticeship and learning from wise mentors in real world environments is the best way to learn.
In my opinion, there's a tiny bit of nuance: it can for sure be taught, but to be internalized requires experience, likely being on the wrong end of something sharp (pager, company going out of business, or you yourself experiencing a bug that otherwise only a customer would see)
What if they ask you: and you? Are you actually building high quality software?
What are your quality gates? How many open bugs are there in production? How often do you close bugs with resolution "won't fix" (because of budget issues)? How often do you have production incidents? What is your testing strategy? How do you test your requirements before assigning them to a developer for implementation? Do you hire external professionals for security testing?
I'll take a stab. Quality software is software that is testable, able to adapt to new features and is architected to match the current organizational structure of the software team so that communication and dependencies don't have an impedance mismatch.
- why is testable software higher quality? Does it add value to the software? I'd venture that untestable software has the same value (if not more) than testable software (due to time-to-market). You can write software that is 'obviously correct' and "high quality" at the same time, without any tests.
- Why does software that can adapt to new features increase the quality? If that is the case, we must argue that WordPress is extremely high-quality software. Or SAP.
- How does architecture influence quality? If that is the case, then there isn't any need for different architectural styles since there should be "one true style" that has the best quality software.
Testable software usually has better quality because you can automate some parts of the quality assurance.
Sacrificing quality assurance to favour other aspects is common, but the quality usually suffers.
A company favouring time to market over testability is likely to release buggy software. They can get away with it.
Adaptability is a common quality, but you can find counterexamples. WordPress and SAP are successful software that may not check all the quality boxes.
Some architectures are for sure worse than others, and there isn’t one good architecture for all kinds of problems.
> why is testable software higher quality? Does it add value to the software? I'd venture that untestable software has the same value (if not more) than testable software (due to time-to-market). You can write software that is 'obviously correct' and "high quality" at the same time, without any tests.
Note I said testable software, not software with tests (there is a difference!). I'd agree that software with tests (which is by definition testable) has a huge developer cost that may not always be in the best interest of the company (like you said, time to market might be important). But in my experience, writing code in a way that can be tested later is only marginally more costly (time-wise) than writing code that can't be. A good example of this is writing modules that communicate with message passing and have state machines, rather than direct function calls. The former has a slightly higher dev-time cost, but you can always retrofit tests once you've achieved market penetration. You can't always do that with direct function calls.
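A rough sketch of that style in Python (names invented): the module only ever reacts to messages, so a test harness can drive it later by replaying messages, with no callers or transport to mock.

    from dataclasses import dataclass, field

    @dataclass
    class OrderMachine:
        # a small state machine driven purely by messages
        state: str = "new"
        rejected: list = field(default_factory=list)

        def handle(self, message: dict) -> None:
            kind = message["kind"]
            if self.state == "new" and kind == "pay":
                self.state = "paid"
            elif self.state == "paid" and kind == "ship":
                self.state = "shipped"
            else:
                self.rejected.append((self.state, kind))

    # Retro-fitted test: just replay messages and check the resulting state.
    m = OrderMachine()
    m.handle({"kind": "pay"})
    m.handle({"kind": "ship"})
    assert m.state == "shipped" and not m.rejected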
> Why does software that can adapt to new features increase the quality? If that is the case, we must argue that WordPress is extremely high-quality software. Or SAP.
This is a good point that you bring up. I think what we are getting at ultimately is that quality and value are distinct entities. Software can have high value without being high quality. In my mind, being able to provide the business with new value-producing functionality without causing a spike in bug reports is my (admittedly vague) standard.
> How does architecture influence quality? If that is the case, then there isn't any need for different architectural styles since there should be "one true style" that has the best quality software.
Architecture has to match how the software teams communicate with each other - how they actually communicate, not how the org chart is drawn (see Conway's Law). So my point is that if there are two separate teams, your code should communicate between two "modules" that have an interface between them. Just like real life. It would be silly to implement a full microservice architecture for that. That's why Amazon's SOA design works for them: it matches how their teams are organized.
Good start, but too broad and open for interpretation.
- Who gets to define testability?
- I want to add a coffee maker to my crash test dummy; is the lack of room for the filter and water tank a sign of a bad design? Or not flexible enough for my feature?
- (cue meme) "You guys have organizational structure?"
- Who gets to claim the impedance mismatch? What are the consequences? Wait, where are the dependencies defined again outside of the software?
I do (just kidding!)... Testability is the ability to add testing at a later point. There is no hard definition, but if you can't test at least 75% of your public-facing functions, then I'd say you don't have testability. Remember, testability means you can have a tighter feedback loop, which means you don't have to test in production or in the physical world - and that means you get where you want to go faster.
> - I want to add a coffee maker to my crash test dummy; is the lack of room for the filter and water tank a sign of a bad design? Or not flexible enough for my feature?
I know you are joking, but imagine for a second that your business did in fact invent a brand new way to test crashes and that coffee makers were the key to breaking into that market. If the dummy can't accommodate that then...yes! It is a bad design, even if it was previously a good design.
> - (cue meme) "You guys have organizational structure?"
Remember: there always is an organizational structure, with or without a formal hierarchy. You want to match your software to the real one.
> - Who gets to claim the impedence mismatch? What are those consequences? Wait, where are the dependencies defined again outside of the software?
There are no "the company blew up" consequences with this type of failure mode. Instead you get a lot of "knock on" effects: high turnover, developer frustration, long time to complete basic features and high bug re-introduction rates. This is because software is inherently a human endeavor: you need to match how it is written to how requirements and features are communicated.
My entry: one that is easy to refactor, regardless of code size.
If that is given, every other metric - features, bugs, performance - is just a linear function of development resources (maybe except documentation, but that is kind of an externality).
It doesn't help that early CS courses cover stupid examples like a recursive implementation of Fibonacci when the obvious solution is a loop. Tail call optimization is another thing that should be reconsidered.
Plain imperative structured code is usually the cleanest and easiest to understand. Introducing abstraction too early I think confuses people and encourages complexity where it's not needed.
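For what it's worth, here's that Fibonacci contrast in a few lines of Python - the textbook recursive version does an exponential number of calls, while the plain loop is linear and arguably easier to follow:

    def fib_recursive(n: int) -> int:
        # textbook version: O(2^n) calls for input n
        if n < 2:
            return n
        return fib_recursive(n - 1) + fib_recursive(n - 2)

    def fib_loop(n: int) -> int:
        # plain loop: O(n) time, constant space
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    assert fib_recursive(10) == fib_loop(10) == 55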
You can't teach software engineering before you have taught people programming. This suggests that programming should be a first requirement for a BS degree and "software engineering" should be MS and onwards.
This is a bit more abstract than the article but in my experience the best software comes from people with taste, rigour, reasoning, a good vocabulary, strong principles, and the ability to write clear and concise English (or whatever human language is used in their team.)
When you truly understand the software you are writing then, and only then, can you communicate it logically in code for the computer to execute and, much more importantly, code for a person to read. Well written and well understood code means it’s very obvious what you are doing. Later, when the code has a bug or needs to be rewritten, then it will at least be clear what you were trying to do so that it can be fixed or extended in some way.
So then the question is how do we train people to have these skills? In school, science experiments are a good way to teach logical reasoning and communication — here is what I thought would happen, here is what I did, here is what happened, and here is what it means. Math teaches you how to reason abstractly and, again, prove your point with logic. It’s a slightly different beast in that it’s harder for a math experiment to go wrong. It’s also harder to come up with and overcome novel scenarios in the lab with math, so in all it complements science well. And of course reading and writing English build your ability to express your thoughts with words and sentences. Many other high school subjects combine these in various measures — history for example is data gathering, fuzzy logical deduction, and reasoning in written language.
The bottom line is that quality software starts with well-educated people; conversely, all the most abhorrent heaps of over-coupled, illegible nonsense I've seen have come from people who, to be blunt, just ain't that smart or well rounded, intellectually.
It’s a principle I carry over to hiring: smart and well educated wins out over pure-smarts.
Honestly I think the root problem is that universities have a degree in computer science, whereas what most people want is to learn to build computer software.
The two overlap most of the time in subtle ways where the science gives an important foundation, such as learning Big O notation and low-level memory concepts where exposure helps. I've personally seen this with a smart coworker who didn't go through university and is great at programming, but I'll catch him on certain topics, such as when he didn't know what sets and maps were, or when he tries to sleep for a second instead of properly waiting on an event.
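For that last point, a minimal sketch of waiting on an event instead of sleeping a fixed second (contrived example):

    import threading
    import time

    ready = threading.Event()

    def worker():
        time.sleep(0.1)   # stand-in for real work
        ready.set()       # signal completion

    threading.Thread(target=worker).start()
    ready.wait(timeout=5)  # wakes as soon as the worker signals, not after a fixed sleep
    assert ready.is_set()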
However, the differences between computer science and building software are problematic. Watching my wife go through university, she has had to struggle with insanely hard tasks that will not help her at all with software, such as learning assembly and building circuits. The latest example is the class where she's supposedly learning functional programming but isn't actually being taught it. Instead, they combined it with how to build a programming language, so rather than getting toy problems to learn the language, she has to take complex code she doesn't understand well, which generates an entirely different programming language, and do things like change the associativity of the generated language. In the end, she feels like she's learned nothing in that class, despite it being her first exposure to functional programming.
On the flip side are the things that are necessary for software but aren't taught in university, like QA. Back when I was in university a decade ago, I never learned about version control and thought it was just for backup. Similarly, I never learned databases or web, as the required classes were instead focused on low-level concepts such as assembly and hardware. My wife is at least learning these things, but even then they often seem taught badly. For example, when they tried to teach her QA, instead of hardcoded unit tests, they made her feed in random inputs and check that the output was correct. Of course, checking the output that way can only be done by rewriting all of your code in the test files, and if there's a bug in your code it just gets copied, which rather defeats the purpose. Even when the assignments are relevant, there is often no teaching around them. For example, her first ever web code was a project where they told her to hook up 6 different technologies that they had not gone over in class, with only the line "I hope you've learned some of these technologies already".
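To illustrate the random-input problem with a made-up example: if the "expected" result is computed by re-doing the work, a shared bug passes silently; fixed known cases plus properties that don't mirror the implementation avoid that.

    import random
    from collections import Counter

    def sort_numbers(xs):  # hypothetical code under test
        return sorted(xs)

    # Weak: the "expected" value re-does the work, so a shared bug would still pass.
    xs = [random.randint(0, 100) for _ in range(20)]
    assert sort_numbers(xs) == sorted(xs)

    # Better: fixed known cases plus properties that don't mirror the implementation.
    assert sort_numbers([3, 1, 2]) == [1, 2, 3]
    out = sort_numbers(xs)
    assert all(out[i] <= out[i + 1] for i in range(len(out) - 1))  # non-decreasing
    assert Counter(out) == Counter(xs)                             # same elements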