What is that problem in front of you? Gradient descent? Tree traversal? Multiple dispatch? Path finding?
What structure represents the data or algorithm? Ring buffer? Blocking queue? Bloom filter?
You rarely need to remember a pathfinding algorithm or trie implementation by heart. What's important is that you a) recognized the problem at hand as "path finding", "bin packing" or whatever. Terminology is important here. The good software engineer needs to know the proper names for a LOT of things. Recognizing and labeling problems means you can basically look up the solution in no time.
So CS is definitely very relevant for software engineering - but you need a broad understanding instead of a deep one.
There is always the argument that a lot of devs basically do monotonous work with SQL and some web thing in Node and rarely even reach for a structure beyond a list or map. That's true - but sooner or later even they run into a performance or reliability issue that's almost always due to an incorrect choice of data structure or algorithm. I'm only half joking when I suggest that most of today's "scaling" is compensating for CS mistakes in software.
I'd have a hard time implementing my own crypto, but I've learned enough to know how to use it to secure communications, hide or protect information, ensure no alterations have been made to some arbitrary asset, identify an asset's source, etc.
I love working with a well understood and boring RDBMS. It's predictable and it lets you quickly move on to other problems. But you still need to have a good understanding of how it's implemented in order to store and query your data efficiently. If you have a poor understanding of how indexing works, you'll probably have a hard time selecting the right data model.
There's actually lots of fun problems in the frontend world. Try to write a multi-touch gesture responder, it's very tricky to get things right. How about a natural animation system that allows interruptions? CSS animations tend to look unnatural because they're largely time-based, and they don't handle interruptions very well. (Spoiler alert: springs are the magic sauce.)
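The spring point deserves a sketch. Here's a minimal damped-spring integrator (the stiffness/damping constants are illustrative, not taken from any real animation library) showing why interruptions come for free: retargeting mid-flight keeps position and velocity continuous, which time-based CSS animations can't do.

```python
def spring_step(position, velocity, target, dt, stiffness=170.0, damping=26.0):
    """One semi-implicit Euler step of a damped spring toward `target`.

    The constants are illustrative defaults, roughly critically damped.
    """
    # Hooke's law plus a damping term proportional to velocity.
    force = stiffness * (target - position) - damping * velocity
    velocity += force * dt
    position += velocity * dt
    return position, velocity

# An interruption is just a new target: the spring keeps its current
# position and velocity, so the motion stays continuous and natural.
pos, vel = 0.0, 0.0
for _ in range(60):                       # one second at 60 fps
    pos, vel = spring_step(pos, vel, 100.0, 1 / 60)
```

After a second of simulated frames the position has settled very close to the target, and changing `target` at any frame produces a smooth redirect instead of a visual jump.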
Learning about compilers unlocks lots of powerful skills too. You can implement your own syntax highlighting, linter, refactoring tools, autocomplete, etc.
Yes. Or rather, it's actually quite easy - but it's extremely hard to do well. That's why one shouldn't do it (or, you should, but for toying with and not for production). See, you obviously know enough about the topic to know this! This is exactly one of those nuggets of "broad knowledge" you need as a software developer. How much do you need to know about crypto?
- You need to know when to hash and when to encrypt.
- You need to know the basic properties of symmetric/asymmetric crypto and key exchange.
- You must know that you never implement any of these algorithms yourself, you only choose from them.
That's about it. You need to know what you don't know (how to write reliable crypto) in this case.
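To illustrate the hash side of that list: hashing is deterministic and one-way, which is exactly why it suits integrity checks and (via a proper KDF) password storage, while encryption is the reversible, keyed case you get from a vetted library rather than your own code. A minimal stdlib sketch:

```python
import hashlib

# Hashing: one-way and deterministic. Good for integrity checks and,
# with a proper KDF such as hashlib.scrypt, for password storage.
digest = hashlib.sha256(b"attack at dawn").hexdigest()
same = hashlib.sha256(b"attack at dawn").hexdigest()
assert digest == same        # same input always gives the same digest
assert len(digest) == 64     # fixed-size output regardless of input size

# Encryption is the reversible case: ciphertext plus the right key
# recovers the plaintext. For that you reach for a vetted library
# (e.g. the `cryptography` package) - never code you rolled yourself.
```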
> If you have a poor understanding of how indexing works, you'll probably have a hard time selecting the right data model.
Right. Basically, the understanding of a database you need is around the amount you'd need to implement a toy database. You know the difference between an index lookup and a scan, and so on. Lacking that understanding means the database is some oracle (haha) you feed SQL and it spits out data. If you know a bit more, you might have a vague idea of how data sits on pages that are organized into a B-tree. You might know how the on-disk tree is magically updated at both ends to stay consistent even if power is cut mid-write, and so on (I don't know, this really isn't my area - I've coded for 13 years without a DB). You didn't invent or even deeply understand any of these algorithms to the point where you could write them on a whiteboard. But it does help when someone asks "what happens to users' bookings if I cut the power?" or "will it be faster to join in the DB or later in the language?"
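That index-versus-scan distinction is easy to make concrete with a toy model - a sorted list standing in for a B-tree index (purely illustrative, not how any real engine works):

```python
import bisect

rows = list(range(0, 100_000, 2))   # toy "table", sorted on the key

def full_scan(rows, key):
    """What the database does without an index: check every row, O(n)."""
    return any(r == key for r in rows)

def index_lookup(rows, key):
    """What a B-tree index buys you: binary search, O(log n)."""
    i = bisect.bisect_left(rows, key)
    return i < len(rows) and rows[i] == key

# Same answer, wildly different cost at scale.
assert full_scan(rows, 123456 % 100_000) == index_lookup(rows, 123456 % 100_000)
```

The scan touches 50,000 rows in the worst case; the index lookup touches about 16.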
Another pet peeve of mine is people who can't identify NP-hard/exponential problems. It happens several times per year that junior colleagues of mine develop solutions that are exponential in time/space, because that's what the problem inherently is.
Them: "Look, I optimized the order in which we pick the X from
Me: "That will take the remaining life of the universe already with 30 items!"
Them: "Dang :( that took me two days to write"
Me: "Do it the dumb way and get off my lawn"
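The dialogue above is easy to reproduce. A brute-force "optimal order" over permutations is a few lines - and a few lines of arithmetic show why it dies long before 30 items (the cost function here is just a stand-in):

```python
from itertools import permutations
from math import factorial

def best_order_bruteforce(items, cost):
    """Try every ordering and keep the cheapest: O(n!)."""
    return min(permutations(items), key=cost)

# Fine for a handful of items:
assert best_order_bruteforce([3, 1, 2], cost=lambda p: p) == (1, 2, 3)

# ...but 30! orderings at a billion checks per second is ~8e15 years,
# comfortably past the remaining life of the universe:
years = factorial(30) / 1e9 / (3600 * 24 * 365)
```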
Everyone should write their own crypto library at least once. Nobody should ever use their own crypto library for anything. :)
One person gave me a step by step guide to how he broke it. It was amazing, and incredibly enlightening how hard encryption truly is.
Mathematical analysis of the encrypted data makes poor encryption easy to break.
This is validation, and it comes from systems engineering (which software engineering both grew out of and fed back into, before we in software forgot about it). It's part of the V&V (verification and validation) portion of system development. Verification means ensuring the system is correct with respect to the specifications and requirements. Validation means ensuring that what's being made is actually the correct, desired thing.
We need our software engineers to study systems engineering, where you will find formal methods being applied to the task of developing complex systems, and rather effectively at that.
Very powerful statement there.
I generally resort to Google and then go find the best approach and implement it if necessary.
I taught myself the basics of the underlying stuff (and it helps that I'm an older developer who grew up on Turbo Pascal and C since I do have a working knowledge of what the machine is doing underneath).
Those are rare cases though.
I've read plenty of implementations, but I never "studied" it thoroughly. And that's one of the reasons I never went through an interview with a Big 5 company, where you're normally expected to implement e.g. a tree insertion algorithm on a whiteboard.
In reality if I need a particular data structure I just pull it in from the standard library.
You don't have to be a structural engineer to build a house ;)
But you need to know exactly what parts actually require a structural engineer or structural calculations. You'll be much faster at building your house if you don't have to think about when you call your engineer and when you can just guesstimate. And you are obviously re-using a lot of structural engineer work when you do (because you buy a prefab door where someone already did the calculations on the hinges etc.) Same with not needing to be a computer scientist to do most software engineering, but you need to use a lot of CS work done by others, and it speeds up your work immensely if you know what term to google.
Also, implementing these deep things (trees, linked lists, hash tables, etc.) means you have a much better understanding of the tradeoffs you make when you use them. Trying to remember O(n) numbers for various structures is much harder than just spending an hour making a toy linked list, an hour making an array-backed list, and three hours making a hash table just ONCE - then you're set for life understanding the complexity of those things.
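For example, a separate-chaining hash table small enough to write in that hour (a toy sketch: no resizing, so chains grow and the O(1) claim degrades - which is exactly the tradeoff worth feeling):

```python
class ToyHashTable:
    """Separate-chaining hash table; enough to feel the O(1) tradeoffs."""

    def __init__(self, nbuckets=8):
        self.buckets = [[] for _ in range(nbuckets)]

    def _bucket(self, key):
        # The whole trick: hash the key straight to a bucket index.
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:                  # overwrite an existing key
                bucket[i] = (key, value)
                return
        bucket.append((key, value))       # chains grow; real tables resize

    def get(self, key):
        for k, v in self._bucket(key):    # O(1) expected, O(n) worst case
            if k == key:
                return v
        raise KeyError(key)

t = ToyHashTable()
t.put("a", 1)
t.put("a", 2)    # overwrite
t.put("b", 9)
```

Writing this once makes it obvious why a bad hash function (everything in one bucket) turns your "O(1)" map into a linked list.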
No, but you're going to need one on call.
I'm reminded of when I was still living at home and my parents had an extension and garage conversion done.
Two builders did the whole thing, one in his late 40s and one in his 60s, and for the most part everything they did was just grunt work with very little need for craftsmanship. It's just banging together stud timbers, pouring concrete, digging holes, laying and packing store bought materials etc. Sure there's a lot of experience behind doing that safely and efficiently, but it's not rocket science and nothing a confident DIY enthusiast couldn't read-up on as they went along.
However there were 3 times when they had to call in experts. 1) a bricklayer (a surprisingly impressive craft if you don't want your house to look like shit). 2) roofers (you definitely don't want a roof laid by amateurs) and 3) a structural engineer to advise on (and to sign-off on) reinforcing a supporting wall that held up part of the new roof.
Software isn't that different. You need someone who really knows their shit for maybe 20%, and the rest you can sort of palm off - it can be done by any of our peers with just a few years' experience.
>> and recalls the algorithm to pick when presented with the
In my experience, people able to pick the right algo straight away are extremely rare.
Broad knowledge even lets you identify several different approaches to the same problem, which can be compared on their strengths and weaknesses before trying any one of them.
E.g. for a nonlinear optimization problem you might consider simulated annealing, genetic programming, gradient descent, etc., and you need to know which is easy/hard to write, that gradient descent is a good fit if you have (well-behaved) gradients, and so on.
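As a sketch of the "easy to write" end of that spectrum, here's plain gradient descent on a toy one-dimensional objective (everything here is illustrative):

```python
def gradient_descent(grad, x0, lr=0.1, steps=200):
    """Plain gradient descent: trivial to write, great when you have a
    smooth gradient - useless on the discrete/rugged problems where
    annealing or genetic search earn their keep."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
minimum = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

Ten lines, and it converges to the minimum at x = 3 - which is exactly the kind of cost/benefit comparison the broad knowledge buys you before writing anything.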
And those didn't really need any requirements apart from basic math.
There is absolutely no reason for a software engineer to learn abstract algebra, infinitesimal math or any of the other dozen courses that you'll never ever use.
And even then, throughout my 10 years now, I can count on one hand the number of times I actually needed to use these things.
For example, a good knowledge of language theory will prevent engineers from creating scripting languages that cannot be parsed. Probabilities are everywhere in machine learning. Many engineers work with by-products of operational research and need to understand the theory behind them to make them efficient. Complex numbers and trigonometry (quaternions etc.) are needed to build even basic 3D engines. Recent probabilistic data structures such as hyperloglog are being integrated into modern database systems. Good understanding of operating systems is useful for security and parallel programming...
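As a taste of that probabilistic-structure family (a full hyperloglog is more involved), here's a toy Bloom filter - sizes and hashing scheme are illustrative only:

```python
import hashlib

class BloomFilter:
    """Tiny Bloom filter: answers 'definitely not present' or
    'probably present' using a few bits per element."""

    def __init__(self, nbits=1024, nhashes=3):
        self.nbits, self.nhashes = nbits, nhashes
        self.bits = 0                      # an int used as a bitset

    def _positions(self, item):
        # Derive k bit positions by salting one cryptographic hash.
        for i in range(self.nhashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.nbits

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def might_contain(self, item):
        return all(self.bits & (1 << p) for p in self._positions(item))

bf = BloomFilter()
bf.add("alice")
assert bf.might_contain("alice")   # no false negatives, ever
# Absent items are *probably* reported absent; false positives are
# possible, and that asymmetry is the whole design tradeoff.
```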
You mentioned a few here, I suppose, there could be a full list already somewhere. :)
Absolutely. A general understanding of CS is necessary to be a competent software engineer. Just like a general understanding of physics is necessary to be a civil/mechanical engineer.
If there is a Venn diagram, there is definitely an overlap of "theory" and "engineering". But theory != engineering.
I vigorously contest the idea that software engineering cannot be rigorous and so shouldn't try.
There are six thousand types of programs (as a wild guess), and they all interact with each other in an exponential explosion of complexity.
For a formal method to work, it has to be generally applicable across a wide range of situations. There are methods like that in software engineering, and you see them in situations where the program is potentially life-threatening. But most programs would be hindered by this rigor.
When programming in a well-typed language, everything we do is a degenerate proof of something. I think it's entirely reasonable to try and make it easier to make less degenerate proofs, and encourage the re-centering of proofs as the atomic unit in programming, because they are for programming (as in math) a well-defined, solid formal system that solid systems can be built on in a way that nothing else can be in software, at least thus far.
In a world where software flows more freely than water, the correctness and reliability of software systems must be taken with utmost seriousness. This applies to basically any tool that contacts end users, and a lot of ones that don't as well.
What do you propose as the theoretical basis for the engineering science of software development?
It's also self-evident that in most situations, correctness and reliability aren't a concern. The counterexamples account for maybe 1% of the field. 99% of the time, if your program breaks, you can pay someone to go fix it and no serious harm was done. Even Github outages, which affect almost all of us, hardly matter.
Yes. Writing software is not math. Does it use math concepts? Of course, but so does just about everything else.
While we're at it, writing software is also not engineering, even though there are engineering concepts that can be applied.
Writing software is its own thing. Pretending that it's something else (like math or engineering) invariably leads to category errors.
Think about food recipes. They certainly use measurements and timing, and engineering principles can certainly be applied (especially when executing them on an industrial scale) but they are not math (or engineering) either. Examining the measurements, timing, and the production chain doesn't tell you anything about whether the recipe is delicious or inedible.
Arguing that a piece of software should be "proven correct" makes about as much sense as arguing that a recipe should be "proven correct". You might as well judge the recipe by the standards of poetry ("Does it have evocative imagery? Does it rhyme or alliterate well?").
People have been chasing the unicorn of software correctness proofs for 60 years, with a notable lack of generalizable success (there are plenty of toy examples, of course). What usually happens is that the "programming is math" people come up with some bizarre academic language that no real-world programmer would use unless forced to do so at gunpoint (followed by the new language sinking without a trace). Alternatively, the "programming is engineering" people come up with some baroque formal process that requires you to write a 500 page document and get six committees to sign off on it before you can write "Hello, world". I'm old enough that I've seen these things happen multiple times.
>It's also self-evident that in most situations, correctness and reliability aren't a concern.
I wouldn't say they aren't a concern at all, but if you're wasting six months screwing around with formal proofs, UML diagrams, or things of that nature, while in the meantime your competitor is iterating three or four times, that is definitely a concern. Operate that way and your milkshake is going to be drunk, yo.
You're wrong, and I've explained this upthread.
>Think about food recipes.
Do you have experience with non-imperative programming paradigms? I'm sorry to say that the comparison to recipes in this context seems fairly naive.
>People have been chasing the unicorn of software correctness proofs for 60 years, with a notable lack of generalizable success (there are plenty of toy examples, of course).
Static type systems are arguably a product of this, especially advanced ones like Haskell's.
>What usually happens is that the "programming is math" people come up with some bizarre academic language that no real-world programmer would use unless forced to do so at gunpoint (followed by the new language sinking without a trace).
Now you're just being anti-intellectual. The whole point is that the real-world programmers are stumbling into all this crap constantly without realizing it. It's completely fucking unavoidable and very much tied up with the fact that programming is math. The only question is what you choose to do about that: learn the math, or stay ignorant.
I don't mean to be condescending but I find comparing UML diagrams and proof assistants a little offensive (among other things), and it suggests you don't know what you're talking about. Modern proof assistants and other formal techniques like dependent typing happen while you're programming (Idris/Agda) or generate programs themselves (Coq), they aren't some sort of Waterfall-ish thing where you have to deal with all the ceremony before you start to get shit done. On the contrary, you get shit done, and it works better when you're done with it too.
That aside, there seems to be something missing in your analysis, or there would be a lot of successful startups stealing the market using proof assistants and formal techniques. It's not anti-intellectual to point out that programmers don't want to use academic languages that have poor usability, steep learning curves, and garbage for standard libraries.
Consider when a regular house is being built. There are many problems that could be avoided if an engineering firm spent an entire year analyzing the designs and their interactions with the target site. However, that's not done because it takes significantly longer and costs obscene amounts of money.
It's easier for the construction crew to fix problems as they encounter them and for the owner to do repairs on the house 40 years later. Yeah, the house isn't as reliable, but it cost 100,000 instead of 1,000,000.
However, it's worth pointing out that you don't magically get the final program out of the TLA+ proof. The proof only holds at the abstraction level you chose to write it at.
There is also a certain level of compromise. By relaxing the requirements a tad, you can still gain many of the benefits while maintaining the light and nimble feel.
So why don't startups use this? Well because there are entrenched technologies that make it very difficult that have nothing to do with the merits of the approach itself.
It is up to us, as developers to take the charge and push for these techniques through open source development, advocacy, and training.
But to claim these techniques are a failure simply because startups aren't using them is pretty ridiculous.
I've never seen that used in the wild (though I'm in the wrong domain (web) these days).
> You're wrong, and I've explained this upthread.
I suspect there is a difference in semantics, here. Software is inherently mathematical, yes. But the practice of writing software is not the practice of doing math.
The output of doing math is proofs. The output of writing software is...something that does something when run on a computer, hopefully interesting, meaningful, useful, or entertaining. In the vast majority of cases, we have not and will not need formal proofs of correctness for software to achieve these things.
If I want a blur effect on some portion of a UI, and choose to implement that with a Gaussian blur, what value is there in formally proving that a specifically Gaussian blur has been applied? All of this is inherently mathematical, but that doesn't imply a need for mathematical proof.
Here I also think that 'Turing_Machine is both wrong in the details and correct in the general point with their recipe example:
> Examining the measurements, timing, and the production chain doesn't tell you anything about whether the recipe is delicious or inedible.
You could, in principle, apply the knowledge of medicine, chemistry and biology, coupled with process engineering and wide-scale people studies, to construct a theory of tasty foods, which could lead to the situation in which you could evaluate any recipe on a theoretical basis. But getting to that state would require tons of up-front work to be done (some of which is being done for unrelated reasons, so maybe in the future a "food theory" will assemble itself) - and in the meantime, getting a piece of tasty food is done much faster and cheaper by finding the solution instead of deriving it. This search is done through iteration.
Similarly, in software, 99% of the time we find a solution, not derive it from first principles - because the former is much cheaper when we care about the solution, and not solving the entire general class of a problem at the same time.
The reason for all the confusion is that programmers are already doing math. They just don't realize it and reinvent the wheels invented by the math community in the past century. It's a matter of semiotics.
Some aspects of language design are reinventing wheels invented by the math community. This is far from constituting the set of "writing software" or "software engineering".
But assuming you're right, I'd like to know - what mathematical wheels am I reinventing in my dayjob of building UIs that let people click up some stuff that later gets put in business-specific XMLs?
XML as a vessel of human knowledge is limited. Good intentions have given it OWL/RDF, XML Schema, and XSLT - examples where others before us tried to extend XML into the domain of semantics and algorithms. Nevertheless, it was found that, without an expressive type system, large and complex business domains cannot be modeled. Apparently, in order to model abstract business domains, we need a language that composes both high- and low-level with near-invisible seams.
So, that click-your-XML application might benefit from a reflective logic, enabling the user to explore the possible state-spaces. If your app uses relational algebra from DBMSs, it might be able to combine the relational algebra with the algebra defined by your schemas. The UI state-space and the XML schema might be isomorphic, which would help prove the completeness of your UI-builder implementation.
Above all, the mathematical way of thinking helps reasoning, communication and correctness. It might not be the only way or perhaps the way is dated. Nevertheless, ignoring math as a programmer, feels like ignoring music theory as a musician or linear algebra as a structural engineer.
> ignoring math as a programmer, feels like ignoring music theory as a musician or linear algebra as a structural engineer.
We're not ignoring math.
It's like you read the article, and set about disproving it, without ever really understanding it.
No. I'm not.
Are formal math and engineering useful in cooking food? Yes, they can be (particularly if executing on an industrial scale). Are they necessary? Not really. Plenty of great cooks just throw in ingredients in the amounts that seem right to them, perhaps tasting the result once in a while. Are they sufficient? Nope. If the best mathematician and best engineer in the world collaborated, the result might be edible or it might be an inedible mess.
If neither formal math and engineering are necessary nor sufficient to produce good cooking, we can safely conclude that cooking is not math or engineering.
> Static type systems are arguably a product of this, especially advanced ones like Haskell's.
Haskell is used by, to a first approximation, no one. Which was my point.
> Now you're just being anti-intellectual.
No, I'm not.
> The only question is what you choose to do about that: learn the math, or stay ignorant.
I was one math class away from getting a dual BS in math and CS in undergrad. Rather than stick around for another semester, I took the BSCS and went to grad school.
You can safely assume that I "learned the math", and that I am not "anti-intellectual".
The point here is that while, on the most fundamental level, the universe may indeed be made of math, that doesn't mean that treating everything with the math toolbox is the best way to proceed. Expecting that math methods will produce great software is a fundamentally goofy idea -- just as it would be to expect math to produce great poetry, painting, architecture, or anything else (and the same for engineering).
Bingo. Good thing we're talking about programming, and not cooking.
I asked if you understood what you were saying because you can't really cook declaratively, recipes are inherently imperative. The comparison to programming thus only fits for imperative languages.
>Haskell is used by, to a first approximation, no one. Which was my point.
If that's your point, then I agree. Not sure how that's in disagreement with my points though.
>No, I'm not.
So what then were you intending to convey by vague references to incomprehensible academia? Surely you weren't meaning to imply they're just wrong, were you?
>The point here is that while, on the most fundamental level, the universe may indeed be made of math, that doesn't mean that treating everything with the math toolbox is the best way to proceed.
Yes, but such a general claim is not what I'm arguing for.
>Expecting that math methods will produce great software is a fundamentally goofy idea -- just as it would be to expect math to produce great poetry, painting, architecture, or anything else (and the same for engineering).
Well, it of course depends on what you mean by "great" software. But I still think you're missing the point here. Computer science is a lot closer to math than poetry, painting, and architecture are. There is a direct, elegant, simple, formal correspondence between programs and proofs. The same cannot be said for those other disciplines.
My only claim is that proofs are slightly more solid, intellectually and formally speaking, than programs; converting more programs into proofs will make them easier to reason about, and since programs can be converted into proofs relatively easily (compared to poems or architecture or paintings or whatever), this is probably a good idea. I still don't understand what your objection to that claim is.
I am not comparing recipes directly to programming. I used recipes as an example of something that has mathy and engineery facets, but that is not engineering or math.
> So what then were you intending to convey by vague references to incomprehensible academia?
I wasn't making "vague references" to anything, nor did I say that academic languages were "incomprehensible". I did say they were bizarre, which is a different thing entirely.
I'm not sure why you find it hard to believe that someone could understand academic languages of the sort you evidently prefer, and yet somehow still choose not to use them. Your attitude seems to be that anyone who doesn't use your preferred methods is "anti-intellectual", "ignorant", or any of the various other personal insults you've used.
Why not just go off and write some awesome software using your methods? If they work as well as you claim, you'll have some hard evidence to back up your assertions.
>There is a direct, elegant, simple, formal correspondence between programs and proofs.
You are defining great software as "software that can be proved to behave in accordance with some formal spec", while people who actually use software (i.e., the people who pay the bills) define great software as software that performs the task they need to have done, can be written economically, and that is easy to use.
By your definition, a great recipe would be one that came out exactly the same every time, even if it tasted like shite, or took three weeks to make, or...
> If that's your point, then I agree.
You are agreeing to something that is false. See my reply to the grandparent.
Unless you have a very loose definition of "no one", that simply is not true.
Just off the top of my head:
Haxl at Facebook
Bond at Microsoft
Supply chain management at Target
What part of "to a first approximation" was unclear?
Haskell is in 47th place, which is consistent with where it ranks in every other popularity list I've ever seen.
I'm standing behind "to a first approximation, nobody".
Note that I'm not saying that Haskell is a bad language, or that Haskell programmers are bad people or anything like it.
I'm saying that the vast majority of programmers do not use Haskell. An anecdote about a particular group that uses Haskell (or even several groups) does nothing to refute that fact.
I think you're dismissing the post without engaging its arguments, so it hardly seems fair to start calling people naive, anti-intellectual, and ignorant. (btw, we all get that these are different ways of calling someone stupid, which is never productive and isn't justified in this case.)
To attempt to engage your argument, as far as I understand it... I think type systems and programming paradigms, however formal, can at best solve problems in only a corner of the problem set of software development. The limitation is that they do not take into account the various kinds of constraints on software systems which nevertheless exist and are often the dominant ones, depending on the project: requirements, maintainability, usability, estimation, etc. - most of the stuff above the red line from the article.
I like Idris, and there's room for these languages. I think you give them too much credit though; programs written in them still have bugs, and your spec can be wrong. But above all else, they can't help you with scale, performance, recovering from hardware faults, or delivering what users want.
The languages are still new, they'll gain traction, and for certain use cases they'll make sense, for others they won't.
"Today a usual technique is to make a program and then to test it. But: program testing can be a very effective way to show the presence of bugs, but it is hopelessly inadequate for showing their absence. The only effective way to raise the confidence level of a program significantly is to give a convincing proof of its correctness." - Dijkstra
"Beware of bugs in the above code; I have only proved it correct, not tried it." - Knuth
For example, the Single Responsibility Principle (SRP) is primarily concerned with making software more manageable on both an individual and team basis. How? By minimizing:
1. Communication overhead between teams/modules
2. Information overload in an individual
You could look at #1 from a graph-theoretic and information-theoretic basis. I'm sure there are many interesting things to prove there, like Amdahl's law and the opposite of Metcalfe's law applied to team/communication/module dependencies. Just as a basic example, if one class has 10 responsibilities shared by 10 engineers with no boundaries specified, then the probability of conflicts and unintended consequences rises, and thus the rate of development slows.
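That 10-engineers example can be made concrete with a little combinatorics (a purely illustrative sketch):

```python
def pairwise_channels(n):
    """Potential coordination/conflict pairs among n people sharing one
    unpartitioned module: n choose 2 = n * (n - 1) / 2."""
    return n * (n - 1) // 2

# 10 engineers in one class: 45 pairs that can step on each other.
# Partition into 5 modules of 2 owners each: only 5 * 1 = 5 pairs.
shared = pairwise_channels(10)
partitioned = 5 * pairwise_channels(2)
```

The quadratic-versus-linear gap is the whole argument for drawing module boundaries.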
As for #2, one could apply more rigor and proof to the question of how these principles help a human understand code more quickly.
I don't have the necessary background right now to explore this in more detail, but like you, I'm very interested in any possible formulations.
I am not a full subscriber to SOLID. I think it promotes a certain kind of degenerate over-abstraction that leads to bugs of a different nature, premature decisions on what needs future substitution, and decreases agility in the medium term.
But the actual practice of doing mathematics (i.e. coming up with novel proofs) works exactly like this as well. The differences are in how much confidence we have in the results, and how much we're able to reuse them.
> It's also self-evident that in most situations, correctness and reliability aren't a concern. The counterexamples account for maybe 1% of the field. 99% of the time, if your program breaks, you can pay someone to go fix it and no serious harm was done. Even Github outages, which affect almost all of us, hardly matter.
I agree that we collectively don't currently care as much about correctness as we pretend we do. I believe software correctness is becoming more important (e.g. the rise of ransomware) and is going to become much much more important, but that's due to what I accept is a non-mainstream view of the future.
If those examples are 1% of software, they take disproportionately more funding than 1%.
And I'm sure customers of software, from personal computers, to business apps, to corporate websites would appreciate more consistent results as well. Reckless software engineering has damaged the reputation of the field as a whole.
Sounds a lot like engineering/design/art. There's a large amount of overlap between the three.
I had the pleasure of speaking to the principal electrical engineer whose firm was responsible for designing the entire electrical system for One World Trade Center in Manhattan, and for Citi Field as well.
He described his work a lot like a software developer would describe their own. And then, in his words, when his company was done they "passed the plans over to the electricians to build it".
It just so happens, in our profession, we've automated the part that electricians are responsible for in his project. Our electricians are called compilers. They dutifully carry out the plan and from time to time they surface warnings/errors back to the engineer for input/correction.
People conflate the meaning of software development because one person handles so many different responsibilities that are usually split among separate people in other professions: architecture, design, analysis, implementation, maintenance, etc.
What a civil/electrical/mechanical engineer does isn't so different from what a software engineer does. It's a "scattershot haphazard endeavor that involves trying dozens of angles [based upon guiding principles] until one of them works" usually run through loads of simulation and analysis. Now, the difference is, when these engineers want to bring their idea into the world it takes physical labor, unlike software development where the feedback loop is near instantaneous. We bring our programs into the world with compilers and can run them immediately.
I mean, there are plenty of examples where civil/electrical/mechanical engineers failed in their design and thus created a bug in their project. See  or any contemporary CPU from AMD/Intel or automobile recalls or spacecraft failures, etc.
There are certain principles of software engineering that lead to more effective software. That's a fact. And the companies that understand this fact will not lose to startups; in fact, they hold a severe technological advantage.
Google and Facebook are two companies that understand software engineering. Talk to the YouTube/Instagram teams and see whether they felt better off -- technologically -- before or after acquisition.
I don't think this is self-evident whatsoever. One of the problems in software engineering is that we never know when a solution is "right," we only know when it's vaguely not-wrong, and even then almost everything we create is still subtly wrong in a way we forgot to think about. Math is solid because there is certainty in the correctness of certain proofs, and these can be used as building blocks for further results. Philosophically speaking this is the closest we're going to get to "real" engineering practice in software, and we're pretty dang far from it if you know what that looks like.
>Software dev is generally a scattershot haphazard endeavor that involves trying dozens of angles until one of them works.
But this is a symptom of any pre-scientific field in engineering, it's not specifically endemic to software engineering. The main difference is that in software engineering we're applying math directly, and in other engineering fields we're applying science. In a certain sense, the natural sciences are purely empirical (modulo advanced physics of course, which overlaps with pure math and philosophy these days) while math and consequently computer science are purely rational. So, what are the solid elements in math? Proofs. Proofs are also how you ensure that the field actually moves forward and you're not stuck recreating solid results that already exist. Sound familiar? It happens constantly in software, but imperative and poorly-typed languages inhibit composability and reuse because they lend themselves easily to extremely specific solutions to general problems.
>It's also self-evident that in most situations, correctness and reliability aren't a concern. The counterexamples account for maybe 1% of the field. 99% of the time, if your program breaks, you can pay someone to go fix it and no serious harm was done. Even Github outages, which affect almost all of us, hardly matter.
This is preposterous. Just because lives are not in danger does not mean that it's not worth doing right by the solution, and frankly I think a lot more lives are waiting to be put in danger by crappy software with bad security than you realize. It's already become almost dogma that all software engineers need to have a deep understanding of security, and I don't understand why correctness can't fall under that rubric as well. If we're going to be having Geohot or even Google or whoever building self-driving cars for possibly billions of people, enormous critical infrastructure projects coming under computer control, etc, the whole industry from education on up is going to have to have a serious attitude adjustment to keep up with the demands of safety, security, and reliability.
It might not matter 99% of the time, (I think it's a lot more than that of course) but we need to make sure that we can as a discipline deliver that 1% when it is absolutely critical, and as of right now it doesn't seem like we can.
FWIW, we agree on this point. But it seems worth treating this 1% case as a separate discipline rather than trying to lump it together with software dev. No one would claim that NASA's software engineering is the same type of work as, say, writing a new HN feature.
>It's already become almost dogma that all software engineers need to have a deep understanding of security
The most secure programs are those that undergo frequent penetration tests and have bug bounty programs. Speaking as a pentester, I think there's not much chance of regular software devs being any good at security. There's just too much to know.
I want to agree with your other points, because in principle it's the correct thing. Unfortunately experience has taught us that the correct thing usually loses in the real world. Being first to market mattered way more for Ethereum than the fact that the DAO had a bug in their smart contract, for example. But there are hundreds or even thousands of examples of this type.
If we pretend that a teenager hacking in their bedroom is doing something fundamentally different than what most developers do each day at their jobs, then we lose out on the ability of that teenager to innovate. We become an exclusionary clique rather than an inclusive group. Luckily market forces still prevent us from becoming that insular, but in the era of walled gardens it's easy to imagine we're not too far off from that fate.
The main issue is that if we try to restrict the free market e.g. with legislation, then the important work will simply move overseas to areas without those restrictions. And unless you're proposing legal restrictions on the software dev trade, it's unclear how to enforce any of the proposals upthread.
Well no, but the same underlying principles are still operating. The only difference is how much you care about heeding them. You don't need an aerospace engineering degree to make a paper airplane or a short bridge or a raft or whatever, but that doesn't mean your knowledge of how to do so reliably and correctly wouldn't be improved by such a degree. And if you're going to sell a product that you hope people will pay you for and subsequently depend on, you should try to make it functional (in the "it works" sense) to the best of your abilities.
>If we pretend that a teenager hacking in their bedroom is doing something fundamentally different than what most developers do each day at their jobs, then we lose out on the ability of that teenager to innovate.
Ok, that teenager can hack, sure, just the same as they can build a two-stroke engine or an electronic alarm for their door or play around with nuclear fusion or whatever. But if they're going to sell those things and make claims about their safety and reliability, the validity of those claims should be enforced by an industry guild accreditation program or legal regulations or whatever. There's more than one way to skin this cat, but it really needs skinning.
>The main issue is that if we try to restrict the free market e.g. with legislation, then the important work will simply move overseas to areas without those restrictions.
The goalposts are being moved here, though, near as I can tell. I told you what I wanted done and why I thought it would work, and now you're telling me I have to figure out how to do it specifically in such a way that it can't be circumvented. I haven't thought up a specific solution to this question you pose, and as such I would point you in the direction of how liability works in other disciplines for similar cases and so on. I do actually think a lot of the same systems could work for enforcement, the biggest difficulty is actually figuring out what the principles should be. If the biggest problem with enforcement is that teenage hackers can't innovate anymore, I'm not really all that concerned. Romanticizing that image does nothing to change the hard realities of the industry.
Not really. The time scales are completely different: that HN feature scheduled for a week of dev time is incomparable to a probe feature scheduled for years of dev time, even if the amount of code involved is the same. You could argue that both are "just programming", but the activities and processes involved in the two are going to be completely different.
You missed the point though. Whether it takes you five minutes to code the feature or fifty years, you're still doing math. The so-called "engineering" principles might change, with regards to division of labor and so on, but the underlying science/math you're dealing with doesn't significantly.
In engineering the time it takes to implement the design/proof/code/output is one of the constraints. In math this constraint is ignored.
The fact that not all contributors are capable of formalizing their output has never been a viable argument against formalization.
Before music education was formalized music was a mysterious craft you could only learn by learning with a master for a decade if you weren't incredibly talented.
Once the orphanages of Naples formalized music teaching in the 17th century it became something much easier to learn and teach.
Yet, music teaching did not eliminate the capability of non-formal craftsmen to innovate. They just have collaborators to help formalize their thought process. (John Lennon did not know music theory and still made awesome stuff. But without formal theory his output could not live on in sheet music, and it would be a lot harder to reproduce it.)
Even Einstein needed help with his math. But without math, theory of relativity would not have been much of anything.
It's all fine to imagine one is traveling in a traincar, but once one needs to compute, say, the orbit of Mercury, everyone needs formal methods.
Creativity and formalism go hand in hand. You need both for a superpowered discipline.
I have over 30 years of experience and I couldn't agree more. The security field - heck, the website security field - is way too complex for me to navigate it properly. Sure, I know about SQL injection, but "authentication is not authorization" is something I tend to forget, and I am pretty sure I have no clue about at least half of OWASP top ten.
Put that way, I think formalizing software development is a worthwhile goal - just like formalizing cooking is - but it's also obviously so uneconomical that we can't expect the industry to bother with it. Formal methods are pretty much basic research - not useful for us in any meaningful timeframe, but hopefully our grandchildren will get some mind-blowingly amazing tools out of it.
It is a way to solve mathematical problems.
See "Guess and Check" from this book
Maybe in an agile web startup. SW engineering in a big industry project is a very deliberate process. You start with a requirements engineer writing very precise requirements. Then a SW engineer turns these into a module design and a test specification. These are turned into source code (for which we use code generators to an ever higher degree) and test cases by yet other SW engineers. In the end everything gets reviewed and/or tested.
Trying things until they work is just not done. That would be way too dangerous for SW that controls cars, airplanes, nuclear power plants, rockets etc.
There are hugely more systems not written to that standard.
So software engineering isn't engineering, either? I would imagine any civil engineer who actually knows what it takes to build a bridge would tend to agree, at least on the basis of the points you have put forward so far.
I bet if we fast forward 1000 years, you'd find that software engineering will have become as rigorous as what we have today in civil engineering.
How is that not like software engineering?
But in the software world, it seems the equivalent of the common cold is a CRUD app, yet these still fail so often that it's newsworthy when such a software project succeeds!
This isn't evident at all. In software, every situation is novel, so there is no room for repeatable processes.
You can build a "bridge library" once and tweak it for each situation, but you never need to develop a rigorous process to build that library again.
This is the point I'm contesting: most situations aren't as novel as the stakeholders think they are. The failure happens precisely because of the assumption that it's novel!
I can potentially lay down a bunch of assumptions, and prove that, given those assumptions, my program acts correctly. However, most of the bugs arise from incorrect assumptions.
One concrete example: When feeding pixel data to the onboard hardware h264 encoder's driver, after having set a resolution of 1920x1080, the resulting h264-encoded frame will be displayed in a h264 decoder with a resolution of 1920x1080. This turned out to be wrong, because the hardware can only deal with blocks of size 16x16, so the width and height must be a multiple of 16, and the driver isn't smart enough to add the necessary metadata to make a decoder crop it, so there's 8 green (because YCbCr) pixels on the bottom of the video. The solution to this is to manually splice the h264 bitstream, to insert my own metadata which has the correct cropping. How the fuck do you formally prove any of that?
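The arithmetic behind that cropping fix can at least be sketched, even if the actual bitstream splicing is much messier. For a progressive 4:2:0 stream, the SPS frame cropping offsets are counted in units of 2 luma samples; the function name below is invented for illustration:

```python
# Sketch only: the rounding/cropping arithmetic described above, not the
# real bitstream surgery. Function name is invented for illustration.
def h264_crop_offsets(width, height):
    # The encoder only deals in 16x16 macroblocks, so the coded picture
    # size is rounded up to the next multiple of 16.
    coded_w = (width + 15) // 16 * 16
    coded_h = (height + 15) // 16 * 16
    # For progressive 4:2:0, the SPS frame cropping offsets are expressed
    # in units of 2 luma samples.
    crop_right = (coded_w - width) // 2
    crop_bottom = (coded_h - height) // 2
    return coded_w, coded_h, crop_right, crop_bottom

print(h264_crop_offsets(1920, 1080))  # -> (1920, 1088, 0, 4)
```

So the 1080-line frame is coded as 1088 lines, and the missing metadata should have declared a bottom crop offset of 4; without it, the decoder displays the 8 padded rows, which come out green in YCbCr.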
Well you basically answered your own question - you would need a language with a more sophisticated type system. You would then need an API/DSL to interact with the graphics hardware.
I suppose that life is irregular.
Not OP, but all pre-scientific crafts from medicine to construction have benefited considerably from the application of formal methods. I don't see why software engineering should be different.
Anyways, surely SE as a field benefits from formal methods like other fields, whether applied by specialists or in the field by practitioners we can debate, but hopefully we can all agree SE isn't defined as just applying formal methods, that there is a lot in development that will never go near a formal spec.
I've yet to see a field without artificial restraints that has abandoned need for human intuition and creativity.
>they are for programming (as in math) a well-defined, solid formal system that solid systems can be built on
Except you forget one big thing. Math does not deal with errors or real-world complexity. Math exists by itself in a world where errors and failures at other levels do not exist. And I will not even get into Gödel's incompleteness theorems or Turing's proof of the halting problem as other fundamental obstacles here.
> can be built on in a way that nothing else can be in software, at least thus far.
What about Systems Engineering? Because guess what? Other complex systems exist in the nuclear, space, chemical, and aviation industries. And they are not developed with formal proofs... and they still work... But they use engineering, proper engineering. Not wishful thinking.
>In a world where software flows more freely than water, the correctness and reliability of software systems must be taken with utmost seriousness.
There is a really, really small but essential mistake here. Two, to be honest. The first one is that any sufficiently big piece of software cannot be proven correct and reliable. That is what complexity theory tells us.
But most importantly, it shows a complete lack of knowledge of the meaning of the word reliability and of the research on complex systems. Reliable and correct systems are inherently dangerous. What you want is a system that is SAFE. A safe system is one that accepts doing the "wrong" thing if it is the safe thing to do.
>What do you propose as the theoretical basis for the engineering science of software development?
Complex System. SNAFU catching. Stop trying to ignore the stack we live in and begin to think about runtime instead of over complex architecture. Safe guards and operators. Debugging as a first class citizen. Bulkheading and recovery. In general, System Engineering and Human Factors.
You can begin with reading web.mit.edu/2.75/resources/random/How%20Complex%20Systems%20Fail.pdf then follow with Nancy Leveson free book https://mitpress.mit.edu/books/engineering-safer-world .
PS: oh, by the way, to everyone saying that other engineering disciplines are based on a rigid step-by-step methodology that enforces correctness: they are not. It is even more of a mess, with slower feedback loops. I worked in other engineering fields before IT and it is not really better. It seems to be a really American thing to believe in the "scientific method". But that does not exist. The world outside is messy and complex.
Try accepting that. There are solutions, but by living in a dream, you are working around the bug instead of fixing it.
Computer programming is not a special snowflake. It's a field that (in many areas) refuses to grow up.
It's important to keep in mind what "growing up" translates to: slowing down. This has both economic and competitive implications. Sometimes it might be a win, but the vast majority of the time your ability to move quickly (and yes, occasionally break things) is an advantage.
For example, a program must run on a platform that doesn't have everything formally defined or guaranteed. If a program relies on that platform, and the platform influences the program in a way that makes your constraints depend on the platform's behavior, then you are out of luck.
Another issue is choosing the constraints that the program will operate under. You would hope that the clients would give you the requirements, including the constraints the software must operate under, but this often isn't the case. When clients don't provide requirements or constraints upfront, an incremental approach works pretty well: clarify the requirements by mapping out what the clients need to do to get their work done, including recording the specific steps they perform or need to perform.
I am relatively ignorant of formal engineering, but I do like that you have provided a fairly clear definition, which was "Clearly define the constraints your project will operate under and prove that it will hold under those constraints". It is better than the people saying "know more math" without saying how, or what, or why it is useful in any clear way; saying software is math just doesn't cut it.
Choosing the constraints for a project is no different in the context of CS than the context of any other fields -- electrical, mechanical, etc. It is a conversation, as you stated. This helps in many ways: it helps the developer understand exactly what the client wants, helps the client understand exactly what they are going to get, and helps build an implicit timeline replete with discrete and obvious deliverables and (if done correctly) those deliverables are fairly modular.
Thanks! I, too, get frustrated that people just say 'math! math! math!' That isn't helpful or meaningful. This is a conclusion I came to after talking to quite a few of my friends and colleagues in different engineering disciplines.
A little off topic, but it would be great if there was a method/system which could provide requirements tracking down the set of artifacts to where you could point to a bit of source code and say it implements that certain requirement. Sort of like a chain of responsibility for code and associated documents. That might be a pipe dream, I don't know.
Anyway, thanks for the information, I remember seL4 being mentioned in discussion of the sorry state of software on medical devices, but had forgotten it until you mentioned it. Thanks.
Every other discipline does simulations because actually running the experiment is orders of magnitude more expensive and time consuming.
See "Why software development is an engineering discipline"
No, what turns programming into engineering is having a rigorous understanding of proof methods and being able to apply them to solve problems the way solid science is used to solve problems in other engineering disciplines. Civil engineering and electrical engineering and mech-e and so on became engineering through the discovery of underlying principles that allowed processes to be formalized, standardized, and improved.
Formal methods have to be at least part of the answer here, unless you can think of a better way to formalize, standardize, and improve the process of software development.
To believe that, you have to believe that simply adding maths to something turns it into engineering. I think this is fairly obviously not the case.
You're also misunderstanding me if you think I think "adding maths" to CS will turn it into engineering. We're already doing math! I think CS could be made a lot more solid by being honest that we are doing math and adopting the relevant formalisms that already exist to make reasoning about mathematics easier.
Do you just disagree that that would help, or do you reject the premise that it could be made better entirely?
Now what does make mathematics slightly more rigorous than other disciplines is the fact that once someone bothers to actually work through the details and finds a counterexample, they can convincingly demonstrate to other mathematicians that the proof is incorrect. Often, the detail in question isn't even all that important, and a workaround can be found that saves the overall proof. But software development is also rigorous in this sense; once you find a case the program handles incorrectly, you can write a testcase to show the difference between expected and actual behavior. And that doesn't usually show the whole program to be misguided, you can just rewrite a small part and things work again.
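A minimal illustration of that kind of rigor, with an invented function and an invented bug: the counterexample is captured as an executable check that pins down expected vs. actual behavior, and the fix is a small local rewrite rather than a rethink of the whole program.

```python
# Invented example: a case the program handles incorrectly becomes a
# permanent test that documents expected vs. actual behavior.
def word_count_buggy(text):
    # Splits only on single spaces, so runs of whitespace yield
    # phantom empty "words".
    return len(text.split(" "))

def word_count_fixed(text):
    # split() with no argument collapses whitespace runs.
    return len(text.split())

counterexample = "hello  world"                # the mishandled input
assert word_count_buggy(counterexample) == 3   # the captured misbehavior
assert word_count_fixed(counterexample) == 2   # the small local fix
```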
You seem to want a method that can not only make definitive statements after the fact, but that can actually ensure for almost all programs and almost all desirable properties that the program fulfills the property. But this is actually much more rigorous than most mathematicians ever bother with, and for good reason.
Complete verification requires stating every obvious fact in excruciating detail, because that is the only way to be sure that it is indeed obvious and a fact; in addition to tracking the complex interactions of the whole thing. Most humans really don't want to make this kind of mental effort if they can avoid it. Even static type annotations are too verbose for some, which has led most modern languages to include some form of type inference. I don't think you will see widespread adoption of formal methods before proof assistants are developed that can similarly handle most of the simple but tedious tasks, so that humans can focus on the actually important bits.
It's also not specifically the rigor in mathematics, but the grounding in solid principles. They have axioms, they know how to prove things, and they know which proofs to trust and why for the most part. What frustrates me most is stuff like programmers haphazardly reimplementing monads over and over again instead of moving on with what they were actually working on before the language and type system got in their way.
Where did the formal specification come from?
How do I know the formal specification is what I actually wanted?
Is the connection between the informal requirements to the formal specification easier to trace/follow than the connection between the informal requirements and the code?
There are two separate problems here - one is that you might not be able to clearly explain what you actually want. The other is that you might simply fuck up when writing the formal spec (code). Formal verification is meant to assist you in the latter - when you know what you want, but can make mistakes writing it down, or not realize your description is self-contradictory in some aspects.
I'm working on a radio product, it's clear how data is being moved into certain buffers. It's clear when data is moved into certain buffers. It isn't clear why data is moved into certain buffers, or if the code is correct, only that it's being done.
A higher level specification set allows us to have a documented understanding of why the code does what it does, and allows us to reason about it at that level. It also makes it feasible to bring new people into the project, because 150k sloc (not a huge project, but not small) of largely low-level code is not something someone new can jump in on and understand quickly.
We also have something that's much easier to reason about when designing system tests, and to test the code against. We write the tests as though the specification were the reality, and test the code against that model to detect where it diverges. If we only had the code, what would we test it against? If it's its own spec, then it can't be wrong.
Yes, people tend to forget that. Having a second formal specification can be helpful, because writing things down multiple times (differently) often helps understanding.
What I've seen so far is that (non-code) formal specs can be very useful when the domain is highly technical, for example network protocols, because they illuminate aspects that are hidden in "production" code.
Of course the fact that important aspects are hidden is a more general problem.
With that said, if Microsoft Word meets the given spec, then it's correct. One way to prove this correctness is with testing.
This is the case for bridges as well. If I hand a spec to an engineering firm requesting a bridge from A to B at any given price. Well then, there's a lot of bridges that satisfy those constraints.
Just because many software companies today don't choose to formally write a spec and prove a program's correctness upon delivery doesn't mean software is a special snowflake. It just means those teams are immature.
Sometimes software development doesn't require a mature team. Like most decisions, there's a cost and timing tradeoff.
The problems that matter are in the design of the spec to begin with. The important part is picking the CORRECT business requirements, NOT in implementing those business requirements correctly.
And do you know what the best way is to test whether you got the spec right? You deploy and see if the feature gets traction.
The "spec" that matters is the success of the business.
And building a product, and releasing it, is the way to formally test if your feature is "correct" according to the spec that is The Market.
It's both, though. You also can't tell if you've implemented them correctly if you don't have a formal spec.
If I make a mistake building my web app, then it will either not matter, or someone will notice the bug in production.
And then when someone notices the bug in production, it can be fixed. If nobody notices it, then I guess it wasn't very important to begin with and can be left "broken".
Implementation bugs are the EASY part, and don't matter a lot of the time.
Or here is a better scenario.
Let's say I am writing a feature, and I think of a way to implement it much quicker, but one that isn't 100% "correct" according to the spec. The quick and dirty, but "incorrect", way to implement it may actually be the better thing to do, because now I can spend my time working on other stuff that is more important.
Purposefully doing the "incorrect" thing according to spec, may actually be the right decision.
You just described the responsibility of a product manager, and finding product-market fit. None of which requires software development. Software development may help but product management certainly doesn't require it.
Designing and building software are now the same thing. And that is only going to become increasingly true. Designing solutions in the context of modern systems inherently requires intimate knowledge of those systems and the capabilities of the technology being used to solve the problems. The "design/spec/build/test" pipeline is dying.
You can recognise that that's a good thing and get on board or you can be left behind.
You misunderstand what a spec is. And you misunderstand what product management is. At no point did I say software engineers aren't responsible for designing and building software. And at no point did I say designing solutions doesn't require intimate knowledge of those systems and capabilities. In fact, that's exactly what I've been saying.
That's what software engineers are there for. To provide expert knowledge and guidance.
But, do you think the electrical engineers defined the spec for One World Trade Center? No they didn't. They got the spec from the architects and they worked to satisfy those requirements.
The spec is not static. There is no design/spec/build/test pipeline in the strictest sense. The spec is dynamic. It's updated through design/spec/build/test iterations.
Just like an electrical engineer may surface new knowledge back to an architect that requires the spec to change... or an electrician in the field will surface new knowledge back to the electrical engineer that requires the spec to change.
Changing the spec is expected. It's called change management.
I don't need to get on board with anything. Google isn't getting left behind anytime soon. And the way they approach software engineering is largely the correct way; I agree with it. (They do however lack coherent product management in some areas.)
Product development skills are extremely important for a software engineer to have, especially at smaller companies, because a lot of the time the person making these product decisions IS the engineer.
During my engineering career, most of the time my boss gives me a general goal for a product or feature that needs to be built. And then I take that general idea for a product, and make all the product decisions about what to build and how to build it MYSELF.
At smaller companies there may be NO product manager. Or maybe the product manager is only making very high level decisions, and isn't really involved in every little nitty gritty detail about the product.
You the engineer have to make the product decisions. And you have to balance those product decisions against tradeoffs, such as how long would it take to build, how high quality it is, and other software engineering design tradeoffs.
Product design and software engineering are very closely related, and any good senior engineer should be competent at both. Software engineering makes you better at product design, and vice versa.
Yes, most startups don't formally define their spec. That's what I've said.
Managing the spec and ensuring product-market fit falls under the domain of a product manager.
So, that's great, you did software development and product management. You wore many hats... like most people do in startups. Doesn't negate the fact that you were a software engineer taking on additional responsibilities.
Trust me, when you work on a team with a clear separation between product management and software engineering and you have a great product manager... it is pure bliss. The only company I've ever felt that technical nirvana with was Google. My god they know what they're doing when it comes to software engineering and separation of responsibilities. At least the team I was on did.
For example, I can have a requirement that says: "The program shall generate a set of weekly time schedules that are maximally preferable, based upon each student's preferences, for all students at a given location while taking into account resource constraints re: room availability and teacher availability." Of course, this isn't specified to the level of detail I'd put in a real spec, but it serves as an example.
There's many ways to solve this requirement, i.e., there are many different programs that satisfy this spec.
There are two solutions that may stand out: (1) a brute-force approach, and (2) a convex optimization approach.
Since I've not explicitly defined an execution-speed requirement in this spec, a brute-force approach may make sense... even if it runs for 5 days. In the real world, you'd confirm this undefined assumption.
If instead my spec said this needs to complete within a day, then maybe the convex approach would make more sense. However, you'd first seek more definition in the spec, i.e., how many students are we generating schedules for? If it's only 10, maybe brute force; if it's 20k, then convex.
So on and so forth.
Now, the interesting part comes when you create the end-to-end (E2E) and acceptance tests. These tests should be the first thing you write because they follow directly from the spec. They will stand up the program as if it's running in production and drive it as such to test whether it adheres to the spec.
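To make the "tests follow directly from the spec" point concrete, here's a minimal sketch of an acceptance test derived from the hypothetical scheduling requirement above. `generate_schedules` is an assumed entry point (not from any real codebase), and the stand-in body exists only so the test is runnable; the point is that the test checks spec-level properties, not implementation details.

```python
# Hedged sketch: an acceptance test written straight from the spec clause
# "generate schedules for all students ... taking into account resource
# constraints". The entry-point name and signature are assumptions.

def generate_schedules(students, rooms, teachers):
    # Stand-in implementation so the test below is runnable: assign every
    # student the first room/teacher slot. A real solver goes here.
    return {s: [(rooms[0], teachers[0])] for s in students}

def test_every_student_gets_a_schedule():
    students = ["ann", "bob"]
    schedules = generate_schedules(students, rooms=["R1"], teachers=["T1"])
    # Spec: schedules are generated "for all students at a given location".
    assert set(schedules) == set(students)
    # Spec: resource constraints are respected (only known rooms/teachers used).
    for slots in schedules.values():
        assert all(room == "R1" and teacher == "T1" for room, teacher in slots)
```

Note that this test would pass equally well against the brute-force or the convex-optimization implementation, which is exactly the "many programs satisfy one spec" point.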
Once all your E2E and acceptance tests are passing, and we've established that your tests cover the spec completely, then we can mark the program as correct with respect to the spec.
There are many different software designs that satisfy a spec.
The software is not the spec.
Put another way, the problem with having a "software spec" is that you already have the "software which is a spec" and now you have 2 specs.
Or a 3rd way, figuring out the right interface for 2 components is at least 50% of the work when writing software. So, spec writing is like programming. And you wouldn't want to program by committee, would you?
This is one thing Amazon got right - having small internal departments and interacting like separate companies.
Spec writing is not like programming, and if it becomes like programming, then you're doing it wrong.
Specs are meant to constrain the visible solution space by defining the problem sufficiently.
To put it in programming language terms: a spec is declarative not imperative.
Though imperative programming looks at the problem from another angle which is closer to execution details, it's not fundamentally different. A smart compiler/interpreter may completely ignore those details as long as it produces the right output, as specified by the spec. That spec being the code.
This is obvious when you see that one style of program can be converted to another style without losing the semantics.
> whether the output solves the problem rather than whether it satisfies some arbitrary behaviour
The spec is considered the solution to the problem! It's not arbitrary in any sense of the word.
The problem with "specs" is that you don't know whether they solve the problem until the solution is implemented and shipped. And even then, you have no real way of knowing what's meat and what's cud. So lots of time, money and people are thrown at solutions that - at best - are wildly bloated or inefficient and - at worst - completely fail to address the actual problem at hand.
The _need_ for specs is - even in large organisations - typically down to misplaced accountability. When you decompose the problem space into small pieces and give development teams a high degree of problem-solving autonomy _and_ accountability for production, the organisational disconnects that lead to the "need" (or, rather, the ill-conceived desire) for burdensome process and specification largely go away.
This isn't witchcraft - it's progress. It works; and it works in large organisations. You haven't witnessed it, so you don't believe me. If/when you do, I'd be willing to bet you'll be converted as I was.
But I'm sure I've given you plenty of reasons to double down on your skepticism.
Please, do detail.
> The problem with "specs" is that you don't know whether they solve the problem until the solution is implemented and shipped.
No. Good product management eliminates this risk. And that's what you're talking about: risk.
You can test a product, feature, anything, many different ways before you build an actual implementation.
That's basic product and risk management.
> And even then, you have no real way of knowing what's meat and what's cud.
> When you decompose the problem space into small pieces and give development teams a high degree of problem-solving autonomy _and_ accountability for production
You just like... defined what a spec is... man.
> This isn't witchcraft - it's progress. It works; and it works in large organisations.
Please provide the proof to back this up. Otherwise it's a baseless claim.
> You haven't witnessed it, so you don't believe me.
No, I've arrived at my beliefs through the data on this point.
And the fact of the matter is that 100+ successful companies that have shipped successful products/projects operate with specs across many different industries from pharmaceuticals to construction to aeronautics and so on and so forth.
If the data supported your conclusion, then I'd agree with it. But the data does not support your conclusion.
This whole discussion can be replaced with this sentence.
Please tell me, what is the specification of this code so that we can verify and validate it:
(defun f (x y) (* x y))
Perhaps. Perhaps it was supposed to be addition. Perhaps it was only supposed to apply to integers. If the above spec is correct, is the code correct? Maybe. It doesn't react well when given non-numeric values. Is that a problem? I don't know, the code doesn't explain who is responsible for validating input and who is responsible for handling errors.
A specification is a hybrid prose/formal document that would give us all that information (if it had any value). The code above is not a specification, it is an implementation. No different than a gear or a cog or a lever in mechanical engineering. It is a thing which does some work. We can examine it and see what it does. But we cannot, by observation or execution, determine why without greater context. That context is the specification.
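One way to make the "code is not the spec" point tangible is to write the missing context down as an executable spec. Below is a hedged sketch for the Lisp one-liner `(defun f (x y) (* x y))`, rendered in Python purely for illustration; the domain decision in the second clause is exactly the kind of information the code alone cannot reveal.

```python
# Hedged sketch: an executable "spec" for the one-liner above, translated
# to Python. The spec clauses are hypothetical choices, not the author's.

def f(x, y):
    return x * y

def check_spec():
    # Spec clause 1: f computes the product of two numbers.
    assert f(3, 4) == 12
    assert f(-2, 5) == -10
    # Spec clause 2 (a choice the implementation doesn't document):
    # non-numeric input is the caller's error, so f may do anything --
    # including Python's surprising string repetition.
    assert f("ab", 3) == "ababab"
```

Two implementations could both pass clause 1 yet disagree wildly on clause 2, which is the "who validates input" ambiguity the comment describes.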
The software is an artifact, one among many, which (hopefully) satisfies a specification.
Can we use those? By your criteria, I mean.
You need science to come up with it and engineering to apply it in practice.
> why the author is so suspicious of formal methods
I think those of us who promote formal methods need to remember this. At best only verification -- making sure the implementation matches the specification -- will ever be fully automated. Validation -- making sure that what we specified is actually what we wanted -- will always be a human activity.
By the way, Curry-Howard is just one way of doing formal proof (one I personally don't like). There are many foundational and practical problems that need to be solved before formal proof is ready to go mainstream (but I am convinced that it will one day).
Software Engineering: Requirements, Modifiability, Design Patterns, Usability, Safety, Scalability, Portability, Team Process, Maintainability, Estimation, Testability, Architecture Styles.
Computer Science: Computability, Formal Specification, Correctness Proofs, Network Analysis, OS Paging/Scheduling, Queueing Theory, Language Syntax/Semantics, Automatic Programming, Complexity, Algorithms, Cryptography, Compilers.
In my opinion, some of those could be on the other side of the line (estimation could be CS, language syntax/semantics and network analysis could be SE). But I agree with the general division.
I studied Electronic Systems Engineering, but somehow always found jobs in software companies. One problem I struggle with is the division between DRY (Don't Repeat Yourself) and WET (Write Everything Twice) coding styles.
Most programmers hate it when code is repeated. They prefer to spend days trying to integrate external libraries instead of just copying the necessary functions into the main branch. There are good reasons for this (benefiting from new features when the library gets updated), but there are also risks (the code breaking when the library gets updated).
Software Engineering priorities include Safety, Portability, Modifiability, and Testability. I interpret that as a WET programming style. "If you want it done well, do it yourself." There's no arguing about responsibility then - the code is mine, and I should fix it if it breaks.
Say, for example, you have a complicated condition you test for frequently within your code. DRY is when you decide to extract that condition into a testable function you can rely on everywhere in your code (e.g. `isLastThursdayOfMonth(date)`). You can extend this same DRY thinking to all the other abstraction tools (e.g. types/classes) you have as an engineer too. I'm sure you'd agree that it would be an enormous liability and maintainability nightmare to rewrite the logic for that function everywhere. God forbid you're ever asked to change your littered logic to the equivalent of `isLastWeekendOfMonth(date)`.
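As a concrete sketch of the extraction described above: the function name comes from the comment's hypothetical example, and the calendar logic is one plausible reading of "last Thursday of the month".

```python
# Hedged sketch of the DRY extraction: one tested home for a condition that
# would otherwise be rewritten (and drift) all over the codebase.
from datetime import date, timedelta

def is_last_thursday_of_month(d: date) -> bool:
    # Thursday is weekday 3; it's the *last* one iff the same weekday
    # a week later falls in the next month.
    return d.weekday() == 3 and (d + timedelta(days=7)).month != d.month
```

If the requirement later changes to "last weekend of the month", there is exactly one function to edit and one test suite to update.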
None of those demand “write everything yourself”, only setting the same criteria for external code you integrate as you would have for code you write yourself.
This is the entire point of Semantic Versioning: to communicate breaking changes through the version number, and to build tooling to programmatically avoid breaking dependent code.
(No, it isn't generally perfect: it does require that a human realize what the API is and that a given change is breaking it. If we had some programmatic language for specifying the API… type systems start this, but tend to not capture everything¹)
¹I suspect there are some formal analysis folks who know more than I do here, screaming that there is a better way. I work in Python day-to-day, so generally, it's all on the human.
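For readers unfamiliar with the convention being debated, here's a minimal sketch of the semver compatibility rule: a bumped major version signals a breaking change, so a "compatible" upgrade keeps the major version and moves minor/patch forward (the caret-range rule used by tools like npm and Cargo). This ignores pre-release tags and the special treatment of 0.x majors.

```python
# Hedged sketch of semver's caret-range compatibility rule.

def parse(version):
    major, minor, patch = (int(x) for x in version.split("."))
    return major, minor, patch

def compatible(current, candidate):
    cur, cand = parse(current), parse(candidate)
    # Same major (no advertised breaking change), and not a downgrade.
    return cand[0] == cur[0] and cand >= cur
```

The whole scheme, of course, rests on the human correctly classifying a change as breaking, which is the footnote's caveat.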
If you're a fan of semver, be warned.
It's interesting/funny that when talking about CS, or more academic points of view around software development, the terms "Formal Specification" and "Correctness" are often mentioned, yet most CS students/labs still use languages that are really badly suited for this job, such as dynamically typed languages like Python.
I hate large python code bases with no strict types specified anywhere. It's a nightmare to maintain that code.
This very much requires "understanding underlying patterns" - what knowledge does your program encode? How can that be broken apart and localized?
Second, if two pieces of code happen to look identical, but each is the way they are for different reasons, they encode different pieces of knowledge and collapsing them is not DRYer. As I've said before, in that case you're not improving your code, you're compressing it. I like to call that kind of overzealous misapplication of DRY "Huffman Coding".
Over the decades I've met a bunch of people who program computers for a living, and there is clearly a spectrum: on one end is the person who spends the weekend benchmarking different sort algorithms under different conditions for the fun of it; on the other is the guy who left the office at 5PM once an integration test passed on a piece of code he pasted in from Stack Overflow, which was deemed to have no regressions. Many disciplines have the same wide range, from chefs who spend their weekends trying different flavors to cooks who reheat frozen patties and serve them; painters who throw human emotion into a painting and painters who lay down a yellow line on a road according to a template for $20/hr.
It seems to me that most, if not all, of the 'theory' stuff in computer science is just math of one form or another. This is not unlike electrical engineering, where all the 'theory' stuff is just physics. You can do the tasks without the theory, but you rarely invent new ways of doing things without that understanding.
But just like carpenters and architects there is a tremendous amount of depth in the crafting of things. That brilliance should be respected, college trained or not, so trying to 'split' the pool doesn't lead to any good insights about what being a good engineer is all about.
When I started managing programmers I realized the best thing I could do was to manage people by agreed upon deliverables for their capabilities, quality, and maintainability. And if it took them 10 hrs to do it or 70hrs didn't matter. As long as I understood, and they understood, how long it would take them to do something we could manage to that schedule.
This isn't going to be popular, but it's true.
Coders are what people called themselves before business started making the decisions about how to write software.
Engineers are what people called themselves after business started making the decisions about how to write software.
Guess what? People who write software aren't engineers, they are programmers. You have crap programmers and you have exceptional programmers, but they are paid to write programs. "Software Engineer" is as valid as a "Sanitation Engineer."
Coder is slang for programmer because they write "source code." It dates back to at least the 80s, probably earlier. It is also an acceptable term for programmer.
If you have to make up a fancy term, call yourselves software developer or software designer. If you want to be called an engineer, go to engineering school.
For me, I have the opposite reaction to coder vs engineer.
For a coder, I think code monkey, someone who writes boilerplate MVC apps and can't handle algorithmic complexities, write code that others can understand, or consider lower levels of a system they are interacting with. Also synonymous with hacker, which I would bet also has a reversed connotation for those who see coder as a good thing.
When I hear engineer, it's synonymous with the jobs at the big name companies and implies the person thinks their job is complex enough to warrant the title, even if not deserved by the standards of other engineers.
Of course, all of this is semantics, including the debate over what is an "engineer". In the end, it's pretty meaningless in terms of impact. CS vs SE I think should be the focus, as yes, the two can be quite different, even if most CS degrees end up working in SE.
Yes, I agree. "Coder" and "Programmer" are grey beard terms.
>Also synonymous with hacker, which I would bet also has a reversed connotation for those who see coder as a good thing.
Yes; a hacker is someone who can bend the computer to their will, even when it isn't supposed to do it. That's one of the reasons people who crack software and infiltrate systems sometimes get that moniker.
>implies the person thinks their job is complex enough to warrant the title, even if not deserved by the standards of other engineers.
Ya, that's kinda the problem. It has an inferiority aura about it.
> In the end, it's pretty meaningless in terms of impact. CS vs SE I think should be the focus, as yes, the two can be quite different, even if most CS degrees end up working in SE.
It absolutely is meaningless, I guess that's what bugs me about it. CS is for creating and improving algorithms. John von Neumann discovered/created merge sort in 1945 (among other things). He is CS. "Engineering" is taking merge sort and using it for efficient joins in an RDBMS; applying science to an actual thing.
BTW, I've been a "Software Engineer," and "Software Architect" for 20+ years. Don't get me started on people who call themselves "Software Architects" that don't even code out their designs.
Look at the origin of the word:
>Middle English (denoting a designer and constructor of fortifications and weapons; formerly also as ingineer ): in early use from Old French engigneor, from medieval Latin ingeniator, from ingeniare ‘contrive, devise,’ from Latin ingenium (see engine); in later use from French ingénieur or Italian ingegnere, also based on Latin ingenium, with the ending influenced by -eer.
Electrical engineers certainly aren't building fortifications and weapons, yet they have it in their title. Software controls the behavior of physical systems just as much as electrical engineer-designed circuits do.
The only people that get pissed off by the use of the word are people that think people who write software are intellectually beneath them.
>Why not call yourself a "Software Doctor?"
If you have a PhD, that would certainly be fine. Doctor is pretty well-defined throughout the history of the word.
To become licensed, engineers must complete a four-year college degree, work under a Professional Engineer for at least four years, pass two intensive competency exams and earn a license from their state's licensure board. Then, to retain their licenses, PEs must continually maintain and improve their skills throughout their careers.
>The only people that get pissed off by the use of the word are people that think people who write software are intellectually beneath them.
Great theory presented as a fact, but I write software for a living, so I guess that theory just flew out the window.
>If you have a PhD, that would certainly be fine.
Just like calling yourself an engineer if you actually had a degree from a school of engineering would be fine too ...
Honestly, if software engineering were treated like real engineers with licensure, required degrees, etc, we'd make a whole lot more money. Companies like to call us engineers because they can blow flowers up our posterior in lieu of actually paying us for it. Personally, I'd rather make double than feel good about myself.
Your argument as well as that Atlantic piece is just "they don't do the same things to what previous engineering fields did so it doesn't count." If that kind of stupid logic held, there would be no such thing as petroleum engineers or electrical engineers because neither of those were things for a long time.
You seem to have a completely wrong understanding about why people do a PhD. They (okay, most) don't do it to attach a title to their name but because they are passionate about a specific field and want to expand their and other's knowledge about it.
This is often interpreted as, "you can't hack it in the real world, so you hide in academia," and in some cases that is true. But really it has more to do with what problems you are interested in solving/exploring.
Jokes aside, a software engineering discipline of its own would not give you the skills needed to accomplish that.
This is quite inaccurate. Hardware directly influences software. "if" statements, functions, and threads didn't exist at one time, and all require explicit hardware support. I believe that as we come up with different abstract constructs at the hardware level, we'll influence the possible software that can be written.
Many early computers had very rudimentary subroutine call mechanisms (e.g. the B-line of the Elliott 803), but this didn't prevent programmers from using functions which returned values, sometimes recursively.
Burroughs mainframes were designed to run Algol 60 (with a few additional instructions for use by COBOL programs), and Lisp Machines were designed to run Lisp. In these cases, the influence of the languages extended to the entire instruction set. This is a better approach, as it's easier to experiment with language design than it is with hardware design.
This happens more often than you'd think. Intel (and later, AMD) added AES primitives to their instruction set to speed up encryption. VT-x (and the AMD equivalent) were both designed to improve the performance of virtualization. Outside of the realm of CPUs, the use of FPGAs -> ASICs for accelerating bitcoin hashing certainly wouldn't have existed if not for the software. Hardware support for CUDA / OpenCL accelerated existing parallel workloads.
> This is a better approach, as it's easier to experiment with language design than it is with hardware design.
FPGAs certainly lower the barrier to experimenting with hardware design, although yes, it's probably still higher than language modifications.
Quantum computers would certainly not be considered "essentially equivalent".
To me, the essence of software engineering is that only about 20% is building the 'good' solution itself: architecture, code, release/deployment, and so on. The remainder is navigating and tolerating the inherent corporate messiness of politics, opinions, power, and everything else. Engineering the solution is the easy part; engineering good requirements and quality is tough.
Science (or empirical knowledge) -> Engineering (or: maintain or implement) -> Technology (or processes)
The relationship between science and engineering has been clear for a while now, even before the appearance of software engineering.
There's a lot of science at work in existing software, so it would be inaccurate to say that software is "unscientific". However not many people get to work on those projects.
A vast majority of people can make a decent living working on user facing technologies built with existing technology. At that level appealing to non-technical stakeholders has much more weight than applying engineering rigor.
But that's not the reality for everyone.
Similarly I'd say in software the engineering bit is making reliable systems that are fault tolerant and secure and so on and then the people bits like the user interface are something like design and psychology, not engineering.
Software development, app development, game development, web development are all probably 90+% software engineering and 1-10% computer science depending on the project. Specific projects may differ such as writing standard libraries, engines, data, standards, teaching, etc. In the end most of it is production and maintenance as part of shipping.
I studied Russell, Godel, Tarski and Quine and then compiler and runtime logic (as a Philosophy major). Back then CS was mostly a realm of 3-Page proofs on alpha renaming or newfangled Skip List speed/space utility.
As an old VAX/Sun or 512K/DOS C programmer working in DC for decades around lots of TC, datacenter and transaction processing folks, an SE MUST have basic speed/space, set theoretic, programming by contract, data integrity and MTBF abstractions in their heads while they plan and develop. Both accuracy and performance against test and measure just matter for the business cases 24/7.
Content software developers patching together framework components on 2-day schedules for consumer Web bloatware rarely understand something like the data integrity needs of billing system logic embedded in redundant switches failing over on rough schedules. Typing commands is not even Software Engineering.
Software Engineering is not an individual identity phenomenon. SE is how groups show responsibility for stakeholder outcomes unrelated to paychecks. First rule of SE is everyone on the team passes the bus test. Nobody is essential. Unless we seek luck, we can't improve what we don't measure. Learning how and what to measure takes real training and group method application. So many out there never know what they are missing.
Business competition minus lucky windfalls is largely based on COST ACCOUNTING. Successful operations will discover heat dissipation cost challenges. Basic CS speed/space, contract covenant assertions, data integrity and MTBF logic in Software Engineers translates very easily into understanding business innovation problems.
If it hasn't mattered to you, it's probably because you are using libraries or apis which have solved for optimal performance.
In short, performance mattered a lot to your code. Only, you didn't slog long hours to make it so.
Back to the topic at hand, if you didn't spend time to understand why a particular module or library is part of your code base - be it for performance or maintainability or any other -ities - you're halfassing your job as a software engineer. Would a structural engineer ever claim with a straight face that they have never worried about the integrity of their struts? That's basically what you said with your claim.
Take a Python library like requests. Who the hell will read the source code and run a profiler on that module if you are a consumer? I don't care until I have to. Perhaps before shipping my production code I can run a profiler. If you are going that deep at the beginning, you are wasting your time instead of building an MVP and iterating. Library code out there is ever changing. One version can be slower than another. You aren't doing meaningful work if you start with performance.
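The "profile before shipping" step mentioned above can be done without touching the library's source at all: `cProfile` and `pstats` ship with Python and wrap the code under test from the outside. A minimal sketch (the `workload` function is a made-up stand-in for real application code):

```python
# Hedged sketch: profiling a workload with the stdlib cProfile/pstats,
# no changes to the libraries being measured.
import cProfile
import io
import json
import pstats

def workload():
    # Stand-in for real application code that leans on a library.
    payload = {"items": list(range(1000))}
    for _ in range(100):
        json.loads(json.dumps(payload))

profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

stats = pstats.Stats(profiler, stream=io.StringIO()).sort_stats("cumulative")
stats.print_stats(5)  # top five entries by cumulative time
```

If the library's functions don't show up near the top, its internals are not your problem, which is the commenter's point.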
OP didn't say he/she is picking a random library out of the blue. I don't remember reading that "random" part.
Just because some hotshot (PG?) said to not optimize prematurely doesn't absolve you of your responsibility towards picking a library to use. Don't optimize early by all means but at least know why the library you picked might not be a great choice for problems you face in the real world.
Do you write your websites in C outputting raw HTML? I'm sure you don't. Clearly you made some sort of reasoned deduction about the tools at hand and went with one which got the job done. Why did you not pick C code spitting out HTML? No point premature optimizing your productivity, right?
Like a good astrologer, I'm going to read into your comment to assume support for my larger point - know your tools.
If the project you are taking on is to optimize the performance of an existing codebase, then yes, you worry about performance, because that is your primary objective.
I do care about performance myself. I did look up which Python json implementation library out there is the best in terms of performance as well as whether the library is actively used and developed.
But that's only because I knew from the beginning that json marshal and unmarshal are expensive. However, I stopped worrying and now only use the native json module that comes with the standard library, because I see no gain for my projects. Perhaps that matters if I am Google. A 10ms gain was not even a problem for me in my projects.
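The measurement behind a conclusion like "a 10ms gain isn't worth the dependency" is cheap to make with the stdlib `timeit` module; a sketch, with a made-up payload standing in for real data:

```python
# Hedged sketch: time the stdlib json module on a representative payload
# before deciding a faster third-party library is worth a new dependency.
import json
import timeit

payload = {"users": [{"id": i, "name": f"user{i}"} for i in range(100)]}

dump_time = timeit.timeit(lambda: json.dumps(payload), number=1000)
load_time = timeit.timeit(lambda: json.loads(json.dumps(payload)), number=1000)
print(f"dumps: {dump_time:.4f}s  loads(+dumps): {load_time:.4f}s per 1000 calls")
```

Only if these numbers actually dominate a profile does it make sense to reach for an alternative implementation.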
Anyway, back to the argument. Let's take architecture and structural engineering. Building a skyscraper is a complex task. Everyone wants the next skyscraper to look different and taller. But no architects or structural engineers I'm acquainted with would start with the question "How do I reduce the cost? How do I make my building taller while maintaining resistance to a 9.0 earthquake?"
Those are concerns, but they will use whatever knowledge they already have to draw a model. Then they run simulations and go over the challenges and problems they need to resolve to meet the requirements. No one starts the actual project by looking at how much they can save.
The only people in computer science and software engineering who always bear performance in mind from the very first step are computer scientists. No one designs an algorithm, a new data structure, or a novel method of computational origami unless the purpose is to find a better complexity (space and run time). But I want to emphasize that software engineers are computer scientists if they want to claim to be; a formal degree is not a requirement. A good software engineer does take performance into account, but not until some MVP working code exists. One might implement the solution using quicksort, knowing it is easy and efficient enough, until recognizing that it is not fast enough, at which point another sorting method may be used or developed.
Seriously, stop worrying about performance. Don't consider it when picking libraries. You'll write better programs as a result. I know how implausible it is, but it happens to be reality.
Lately, more and more SE degrees are sprouting up.
On the other hand, we also have universities of applied science, where Informatik is often more like SE.
But look, what the math and science sides of the room throw at us definitely informs the engineering. In every other engineering discipline from architecture to ditch digging, there is a feeder system from a variety of mathematical and scientific disciplines. While many other engineering disciplines are well established, they are not immune to this and in general don't begrudge it.
Doctors are required to keep up on the state of treatment. Architects need to keep up on materials science AND new mathematical modeling techniques and tools. Car designers care about new discoveries in lighting, battery and materials technology.
Here's a good example of the kind of stuff we all should be on the hook for. I've tried to push this paper up to the front page a few times now because it's roughly the same as if someone walked up and calmly announced they'd worked out how to compress space to beat the speed of light:
Folks are generalizing linear sort algorithms to things we thought previously were only amenable to pair-wise comparison sorts without a custom programming model and tons of thought. No! And then a famous engineer-and-also-mathematician made an amazingly fast library to go with it (https://hackage.haskell.org/package/discrimination).
We're seeing multiple revolutions in our industry made of... well... OLD components! While deep learning is starting to break untrodden ground now, a lot of the techniques are about having big hardware budgets, lots of great training data, and a bunch of old techniques. The deep learning on mobile tricks? Why, that's an old numerical technique for making linear algebra cheaper by reversing the order in which we walk the chain rule. O(n) general sort is arguably bigger if we can get it into everyone's hands, because of how it changes the game for bulk data processing and search (suddenly EVERY step is a reduce step!)
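To give a flavor of the discrimination idea in a few lines: instead of comparing keys pairwise, map each key to a small integer bucket and distribute, composing passes for structured keys (the stability of each pass makes this work, as in radix sort). This is my own toy illustration of the principle, not the Haskell `discrimination` library, which generalizes it far beyond small integer keys.

```python
# Hedged sketch: "discrimination"-style linear sorting. Keys land in buckets
# in O(n + k) time, with no pairwise comparisons at all.

def bucket_sort_by(items, key, universe_size):
    """Sort items whose key(item) is an int in [0, universe_size)."""
    buckets = [[] for _ in range(universe_size)]
    for item in items:
        buckets[key(item)].append(item)
    return [item for bucket in buckets for item in bucket]

def sort_pairs(pairs, universe_size):
    # Composite keys: discriminate on each component, least-significant first.
    out = bucket_sort_by(pairs, key=lambda p: p[1], universe_size=universe_size)
    return bucket_sort_by(out, key=lambda p: p[0], universe_size=universe_size)
```

The interesting research result is that this composes generically over sums, products, and even some recursive types, which is what makes "linear sort for things we thought needed comparisons" plausible.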
We've similarly been sitting on functional programming techniques that absolutely blow anything the OO world has out of the water, but require an up-front investment of time and practice with a completely alternate style of programming. But unlike our fast-and-loose metaprogramming, reflection and monkey patching tricks in industry these techniques come with theorems and programmatic analysis techniques that make code faster for free, not slower.
Even if your day job is, like mine, full of a lot of humdrum plug-this-into-that work, we can benefit from modern techniques to build absolutely rock solid systems with good performance and high reliability. We could be directly incorporating simple concepts like CRDTs to make our systems less prone to error.
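As a taste of how simple the "simple concepts like CRDTs" really are, here is a minimal sketch of a grow-only counter (G-Counter), one of the classic CRDTs. Each replica increments only its own slot, and merging takes the element-wise max, so replicas converge no matter how messages are ordered, delayed, or duplicated.

```python
# Hedged sketch: a G-Counter CRDT. Merge is commutative, associative, and
# idempotent, which is what makes replica convergence automatic.

class GCounter:
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}  # replica_id -> count contributed by that replica

    def increment(self, n=1):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def value(self):
        return sum(self.counts.values())

    def merge(self, other):
        # Element-wise max: applying the same merge twice changes nothing.
        for rid, c in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), c)
```

Counters that also decrement, sets, and registers are built from the same merge-is-a-join idea; none of it requires exotic infrastructure.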
It's our job (and arguably it's the hardest job of the field) to dive into the world of pure research, understand it, and bring what's necessary out to the world of modern software. That means more than just tapping away at CSS files, or wailing about NPM security, or shrugging and saying, "Maybe Golang's model is the best we can hope for from modern programmers."
While we're drawing distinctions, stop calling yourself an engineer unless you're legally licensed as one. Programming may share similarities with engineering, but it lacks the professional accreditation and liability.
The folks in fields like mathematics or physics once didn't consider "Computer Science" to be "real science". As a fun fact, there were no computer science journals for quite a long time. Researchers like Dijkstra would identify themselves as "Mathematician" and publish their now very well known algorithms in the mathematical literature :).
My half-assed analogy:
CS is to SE as Physics is to Mechanical Engineering.
In both cases, it's unwise to trust one category with screwdrivers...
In Engineering you have architects/industrial designers etc. They work out the product specifications and then ask the engineers to deliver an efficient workable solution that fits those specifications.
Sometimes, at least from my view, it feels like software engineering reverses the two roles, i.e., the engineer supplies the API and the customer works around the design. Think about something classic like Unix: in some cases engineering has constrained the design rather than design constraining the engineering. This is not necessarily a bad thing, but it is different.
I've always thought programming would eventually go the way of mechanical engineering, with engineers doing design vs fabricators doing the manufacturing. The closest we've come so far in software, I think, is having one person do the architecture or write a spec while others implement the code. Not quite the same, but I wonder if we'll get there eventually.