A couple of things I don't see mentioned (apologies if they're in there, I haven't read every word):
1. Process does (and should) vary tremendously depending on factors including the organization size, organization maturity, market maturity, experience level of the people involved, budget, etc. The book seems to suggest that there's one process to rule them all.
2. Often there are significant unknowns about a project: unknown technologies, unknown market needs, unknown requirements. Being able to accommodate the unknowns, which can mean not expending effort trying to know something unknowable, is important. I think the book gives an impression of quite confident, smooth progress toward project completion that I personally have never observed.
Also: The value of Rubber Chickens is not mentioned...
It helps to take a break sometimes.
There's so much to cover, I started out with the formal stuff first. It seemed like a good order, but perhaps it is a bit off-putting at the beginning of the book.
I'd see strict adherence to any sort of one-size-fits-all recipe for workflow management as a greater sign of disaster than healthy compromise, accommodation for individual working styles, and respect for the human need for autonomy and self-direction.
By fake I don't mean fraudulent, but rather something whose only purpose is to satisfy process enforcement. For example: a requirements document that nobody reads after it has been written, or a design document written before the manner in which the problem will be solved is well understood (and is therefore never read). Project plans that somehow turn out to have underestimated effort by 3X...
See, I think this is actually intentional. The "process" itself exists to give management a surface area of metrics they can datamine for their own political needs, more or less independent from the actual success of the project or team.
I liken it to creating Dutch books. With each layer of management you go up, the pressure to create Dutch books that provide you with blame insurance and protect your bonus -- crucially, diversified across all different business outcomes -- gets more intense, and the people who pull this off get compensated hugely and can often make leaps into C-level executive teams.
Lower-level managers might not consciously know about this, and may actually think there's some truly redeeming reason to care about the frivolous, irrelevant metrics (e.g. Agile burndown). But the higher up you go, the more openly and unabashedly people operate: they know the metrics are bullshit; they know that you know that they know the metrics are bullshit ... they don't care ... they are just getting on with their own business of political games to diversify away bonus risk and arrange for metrics-based blame stories for scapegoats.
It's not that it's confusing where the credit lies. It's that people know the credit does not lie with the formalized process at all but that they need that excuse in order to politically manage how they get their share of the credit.
Unfortunately, despite the OP's strong efforts to present the topic in a useful, taxonomic way, I think treating formalized process with this degree of veneration simply perpetuates the political system that needs inexperienced developers to roll over and play dead.
My first "favorite" comment!
The book does take the time to provide alternates at many points. It should explain "what really happens," but it first does that for waterfall, then spiral, and agile, etc. If you are reading the waterfall part, for example, your "really happens" is going to be different if you are currently working under Agile. To avoid info-overload I've tried not to explain everything at once, but of course that means valid points have to wait until later.
Watching an organization strain in vain to assert a one-size-fits-all approach, such as letter-of-the-law Agile, is unpleasant, especially once the political games set in. It's even worse to be on the team, actively prevented from doing your job by all the boilerplate and parochial adherence to cargo-cult process.
Instead, do whatever works organically for your situation. Don't prescribe some established, brand-name process (or, worse, some shiny new thing) -- that automatically casts the decision as if it were limited to a choice between a few specific (and all bad) options.
To be clear, there's a difference between saying "do whatever works organically based on the human properties of your team and your situation" vs. "do whichever one of these 2 or 3 big box things you can maybe lobby for."
It's like saying, "Here's a first course on how democracy works ... you've got Republicans and Democrats so pick one of them that suits you better." It teaches you to think that democracy (the analogue of the ideal of a successful dev process) is intrinsically delimited into certain categories like Republican, Democrat, etc. (the analogues of Agile, Waterfall, ...).
I think it's better that young impressionable minds are more open and not made to think in such constrained, delimited ways right from the start (even if real life will beat it into them later on).
All these things vary all the time. When I say "whatever works" what I mean is no one knows what will work for your team. No textbook will know that. It might consist of some composition of management patterns from some books, but often it will not, and you need to develop an eye for deviating from that. There is no recipe. There is no cluster of things that all the good companies had in common -- they all did (and still do) things very differently. Amazon treats people like shit and hammers on them -- and delivers good products. Google forsakes some income opportunities to selectively uphold their "don't be evil" mantra. Apple puts usability ahead of everything. Microsoft puts enterprise saleability ahead of everything.
All these different value systems, management systems, workflow monitoring processes, etc., etc., have places and times when they'll work. If you're stuck thinking in terms of rigid structures, you won't see it, and it truly can make a big difference.
Every job I've ever left, I've left because the process and the experience of being managed in that role led to burnout, even after talking openly with managers about what wasn't working. I'm (perhaps foolishly) a very loyal guy. I'd love nothing more than to stay at a company for a long time and get into a groove, even if I was leaving money on the table by doing so. But I can't seem to find any organization that values human-affirming management concepts over and above ritualistic adherence to bureaucratic process.
Anyway, I just see a lot of "the road to hell is paved with good intentions" in this. If you are not already familiar, you may enjoy reading Moral Mazes, because I think it's impossible to approach the task of software management without first approaching the generic task of human management and how it fits into bureaucratic machines.
I do exactly the same thing. For example, I'm going to leave soon because the development process is so crippled that it just makes me want to cry. I tried to improve it and talked to management, but nothing changed. I started to feel a void inside, and I realised there was nothing I could do except leave.
Also, from a practical standpoint, both the design and construction chapters are huge, so combining them doesn't seem to be a good idea. Perhaps I could add your warning though, if you don't mind me quoting you.
I suppose it isn't the only way to look at things, just common.
Though we are approaching the philosophical realm at this point… it reminds me of the part of a Philosophy class where you learn to question whether you can even trust your own senses. I'm not sure; however, this is a book for beginners, and I sit on the shoulders of those who came before. I'm not sure I'm qualified to reimagine software engineering from the ground up as you describe. If you write that book, I'd read it!
I am also writing a "this much I know" style book, and would be very interested to hear about your trials and tribulations.
I am very impressed you got from a pile of notes to a finished product - bravo.
In my world (enterprise software), the design phase is usually the most expensive phase, as it tends to be staffed with expensive architect/designer/technical lead-level folks.
Yes and it is a pretty good principle. The last thing you want in a software project is a design which changes all the time.
We can validate specifications for correctness with automated theorem proving! The least we can do is model-checking. At least for the hard algorithms and core system interactions. We've had this ability for decades.
The problem with software development is that we get all hand-wavy about "architecture," and "design." I try not to physically groan when someone starts sketching out a box diagram. They're fancy pictures, sure, and useful at a conceptual level... but the reality is that the software will look nothing like that diagram. And there's no way to check that the diagram is even correct! Useless!
It doesn't have to be painfully slow, burdensome, and expensive to employ these tools either. Contrary to popular belief it doesn't take a team of highly trained PhD physicists to build a formal mathematics for modelling computations these days. There are plenty of open-source tools for using languages like TLA+ that work great besides! Ask Amazon -- they published a paper about it.
Watch Leslie Lamport - Who Builds a Skyscraper without Drawing Blueprints?: https://www.youtube.com/watch?v=iCRqE59VXT0
I just took a short look at TLA+ and read a tutorial.
Don't think I could use it without drawing a picture.
A sketch is still useful in the absence of anything else. But we don't build bridges and skyscrapers that way, so why is it good enough for our data-centres and applications?
I don't think lives have to be on the line in order for software to be considered harmful if not built correctly. There are plenty of problems where having a thoroughly checked design will save you plenty of headaches and afford you many more opportunities.
It's necessary to have a formal language to describe this design. Such a formal language needs to have one and only one interpretation. We should be able to take a document written with that formal language and verify that it meets our assumptions. We should also be able to verify that the final implementation is isomorphic to the design.
Luckily such tools are ubiquitous. You can use whatever programming language you like to specify your design. If the programming language is poorly defined, you must also decide ahead of time which build system and runtime environments you will support.
You can document your assumptions using the same programming language. This is made easier if you use a "test framework", but if needed you can also roll your own. You can validate your design by running the tests against your design. Normally it is best to break up your design into "units" that make it more clear how to validate your assumptions. You can then add some extra validation that the composition of "units" transitively validates your assumptions.
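A minimal sketch of that idea (the function and the checks are invented for illustration, not taken from the book): each executable check documents one assumption about a "unit", and running them validates the design.

```python
def dedupe(items):
    """Design 'unit': remove duplicates, preserving first-seen order."""
    seen = set()
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

# Assumptions about the unit, written down as executable checks.
def test_dedupe_removes_duplicates():
    assert dedupe([1, 2, 1, 3, 2]) == [1, 2, 3]

def test_dedupe_preserves_first_seen_order():
    assert dedupe(["b", "a", "b"]) == ["b", "a"]

def test_dedupe_handles_empty_input():
    assert dedupe([]) == []

if __name__ == "__main__":
    test_dedupe_removes_duplicates()
    test_dedupe_preserves_first_seen_order()
    test_dedupe_handles_empty_input()
    print("all assumptions hold")
```

A test framework like pytest would discover and run the `test_*` functions automatically; the hand-rolled `__main__` block shows the same thing with no dependencies.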
You can verify that your final product adheres to that design because there will be a 1:1 mapping from the design to the final product (i.e. the final product will be implemented using the same code as your design). There are wonderful tools that will tell you the "coverage" of code used in the final product that has been validated against the assumptions.
Finally, you can even specify your requirements using the same formal language and verify that the design meets the requirements. Normally you do this by validating that the "integration" of "units" meets your assumptions. This should not be done without individually validating the assumptions on the "units", though, because the "integration" can lead to exponentially growing alternatives. Normally it is infeasible to validate each alternative.
Yes, this reply is tongue in cheek, but I am not ignorant of formal specification methods. They have their place. That place is currently not in a professional software development shop. We have better methods at the moment. Possibly formal specifications methods will improve to the point where we can reasonably use them, but we aren't there yet.
I disagree. Amazon has had great success employing TLA+ in finding bugs, testing design changes, and chasing aggressive optimizations.
Perhaps it is because there are myths still floating around regarding formal methods that make developers cringe when they hear mention of them.
Nonetheless, I couldn't find reference to it in the book... did I miss it?
And besides... unit tests, I'm sure you are aware, aren't good enough alone. They can only increase your confidence in an implementation but they prove nothing.
If we want to start calling ourselves engineers, I think we'd better start looking at how the other engineering disciplines operate. I don't think it will be long before an insurance company comes along and finds actuaries capable of monitoring risk in software development projects, and liability becomes a necessary concern.
Anyway, I regret the tone of my previous message, which mostly made me look foolish, and thank you for your kind response.
I googled TLA+ examples and tutorials and I am wondering how I would apply this to an already existing application?
You don't have to specify the entire application to get the benefit of high level specifications. Even specifying the protocol between two communicating channels can bring benefits.
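To make that concrete, here is a toy explicit-state exploration written in plain Python rather than TLA+ (the protocol, its states, and the invariant are all invented for illustration). TLC, the TLA+ model checker, performs this same kind of breadth-first search over reachable states, just at a vastly larger scale.

```python
from collections import deque

# State: (client, server), each side a small mode string.
INIT = ("idle", "waiting")

def next_states(state):
    """Enumerate every possible next step of the toy protocol."""
    client, server = state
    succs = []
    if client == "idle":
        succs.append(("sent", server))          # client sends a request
    if client == "sent" and server == "waiting":
        succs.append((client, "replying"))      # server picks it up
    if server == "replying":
        succs.append(("idle", "waiting"))       # reply delivered, reset
    return succs

def invariant(state):
    """Safety property: server never replies without an outstanding request."""
    client, server = state
    return not (server == "replying" and client != "sent")

def check():
    """Breadth-first search of all reachable states, checking the invariant."""
    seen, queue = {INIT}, deque([INIT])
    while queue:
        s = queue.popleft()
        if not invariant(s):
            return False, s          # counterexample state
        for t in next_states(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return True, len(seen)           # invariant holds in every reachable state
```

Running `check()` here explores the three reachable states and reports the invariant holds; the point is that even a channel this small gets an exhaustive answer rather than a sampled one.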
like there are many problems where text is not sufficient
like there are many problems where code is not sufficient
like there are many problems where documentation is not sufficient
like there are many problems where using the right tool for the job is not sufficient
I've seen plenty more work on UML as academics & commercial vendors are still all over it. I couldn't find an example for "Processing" because they picked the stupidest name possible: a word so overused I'm getting results from the food industry, compilers, IRS, and computers all at once.
So, UML would let you specify data and behavior then confirm properties about them, catch inconsistencies in requirements, or aid integrations. SysML is used for this in industry with verification results in academia even for UML as I showed. So, it's reality rather than theory even if you or I think better methods exist. I'll take a combo of Z, B, CSP, and/or Statecharts over UML any day. Coq and HOL if I were specialist enough.
>There are plenty of open-source tools for using languages like TLA+ that work great besides!
Does your team know TLA+? Is it an efficient use of their time to learn it? Are a bunch of TLA+ beginners going to crank out properly written software?
Current research is taking it further with code generation from specs like AADL, UML, and especially SCADE/Esterel.
Or you're at the point where formal validation is a requirement.
There's nothing stopping one from using any of these tools as design tools. I'm working on a problem to check whether items delivered by a single producer on a FIFO queue with N workers can guarantee all work items will eventually be attended to. I could just write the code for that but I'll never prove the system works that way. The best one can do is gain confidence that for the prescribed scenarios it will work. You can get good coverage and use all of the tools we know to release something others may decide to use... but then you'll be fixing your errors after the fact when they are discovered and reported by your users. Or in the case of a system as large as AWS you may find that obscurity is no longer a comforting buffer... 1 in 100000 becomes a frequent occurrence.
update: There's nothing preventing you from only using formal methods on the critical parts of your system where a high degree of reliability is useful. One does not need to formally verify everything to gain the benefits.
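For the liveness claim in the producer/worker question above, a model checker is the right tool, but the brute-force idea behind it can be sketched in plain Python for a small bounded instance (the model and all names here are invented, not the commenter's actual system): enumerate every interleaving of produce/dequeue/finish events and confirm that every terminal state has each item handled exactly once.

```python
K, N = 3, 2  # items produced, worker slots

def explore(produced, queue, done, in_flight):
    """Recursively enumerate all interleavings (no state dedup; fine at this size).

    produced:  count of items emitted so far (item ids are 0..produced-1)
    queue:     tuple of pending item ids, FIFO order
    done:      frozenset of finished item ids
    in_flight: tuple of item ids currently held by workers
    Returns the number of terminal interleavings reached."""
    moves = []
    if produced < K:                              # producer emits next item
        moves.append((produced + 1, queue + (produced,), done, in_flight))
    if queue and len(in_flight) < N:              # a free worker dequeues
        moves.append((produced, queue[1:], done, in_flight + (queue[0],)))
    for i, item in enumerate(in_flight):          # any busy worker finishes
        moves.append((produced, queue, done | {item},
                      in_flight[:i] + in_flight[i + 1:]))
    if not moves:
        # Terminal state: everything produced, nothing pending or in flight,
        # and every item was attended to exactly once.
        assert produced == K and not queue and not in_flight
        assert done == set(range(K))
        return 1
    return sum(explore(*m) for m in moves)

terminals = explore(0, (), frozenset(), ())
```

This only checks the bounded safety part; the real "eventually attended to" claim needs fairness assumptions about the scheduler, which is exactly what temporal-logic tools like TLA+ handle and this sketch cannot.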
> But if your customers aren't actually engineers themselves, formal validation is unnecessary and inefficient.
I think it depends on the difficulty of the problems you address with your software. If you're just sorting some lists or making system calls you might not want to bother. However if you want to guarantee consensus in the face of delays and partitions you'll need more than code to make any strong claims of efficacy. And if the public interest relies on your system I think it's necessary to serve them in the best capacity using the state of the art tools.
> Does your team know TLA+? Is it an efficient use of their time to learn it?
No. They could pick it up in a couple of weeks if necessary. It is an efficient use of their time: fewer bugs at scale means less downtime and less time spent chasing errors that slipped through.
> Are a bunch of TLA+ beginners going to crank out properly written software?
Are a bunch of beginner programmers going to crank out properly written software? There are different levels of experience and skill on any team. With training and diligence even beginners can learn to adopt the skill and ability to recognize when and where to use high level specifications and how to abstract systems into mathematical models they can test and prove.
Until then I suppose we have to live in a world where security breaches are common and the recall rates on cars continue to increase as unreliable software continues to cause failures, cost lives, etc.
Or you could just prove from first principles, like we all learn in formal algorithm design theory.
The question is not whether proofs are valuable, the question is whether translating a system into TLA and using that proof checker is more reliable and saves you time over just attempting a proof directly.
Even if your TLA proof checks out, it may be a false positive because it doesn't accurately reflect your production code.
What TLA+ and other such languages give you is an automated theorem prover or model checker, usually built on some form of temporal logic and predicate calculus. This is especially good at modelling multiple communicating processes. Taking a logical approach to discrete maths has a few benefits here.
> Even if your TLA proof checks out, it may be a false positive because it doesn't accurately reflect your production code.
This is something one needs to be concerned with when writing high-level, abstract specifications. There's no way yet that I know of where we can synthesize the program from the specification although I am aware of research in that regard. However we can still gain the benefit of well-defined invariants and pre/post-conditions that we can use in testing our implementation. For now your implementation will be separate from your spec but you gain insights from the spec you would not otherwise if you had only written code.
update: as to whether it is an efficient use of time to use a model checker or automated theorem prover... well, I think the reason for their invention was specifically to handle the tedious task of proofs in formal maths. Some operations are tedious and mechanical, and computers are better suited to the task than humans.
One area I find interesting is languages with dependent type systems like Agda and Idris. It seems like we're not too far away from being able to model and prove our specifications directly in the type system alongside the program that implements it.
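For a flavor of what that looks like, here is a small, untested sketch in Lean 4 syntax (standing in for Agda/Idris): the function and a proof of one of its specifications live side by side in the same file, and the type checker verifies the proof.

```lean
-- A list-reversal function...
def rev {α : Type} : List α → List α
  | []      => []
  | x :: xs => rev xs ++ [x]

-- ...and a machine-checked specification of it: reversal preserves length.
theorem rev_length {α : Type} (xs : List α) :
    (rev xs).length = xs.length := by
  induction xs with
  | nil => rfl
  | cons x xs ih =>
      -- (rev xs ++ [x]).length = (rev xs).length + 1 = xs.length + 1
      simp [rev, ih]
```

If the spec were violated (say, `rev` dropped an element), the file simply would not compile, which is the "proofs alongside the program" property the comment is pointing at.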
Btw, besides the .svg problems, the Kindle version looks fine on my iPhone; it is not squished, if that helps any.
Edit: have linked several dense images, thanks.
A pdf of the full book I'm not sure about yet, should I be concerned about copying? I'm new to authoring.
You shouldn't be concerned about copying no matter the format. Almost anyone who gets a copy of your book illegally is someone who would not have read your book at all if he couldn't have gotten it for free. And with these people, you are better off if they do get it; they may share it with someone who will become a customer, or they themselves may become customers in the future, when they learn that they like what you do and/or their purchasing habits change.
Some books are really worth having on your bookshelf.
Also consider a team/company license priced at some multiple (15x?) that will allow a manager or lead to buy your book for their company/team without having to worry about violating copyrights or managing buying the book for every employee.
 - http://blog.gumroad.com/post/40614820182/introducing-pdf-sta...
For some reason, iBook store does not allow purchase from my country. It's been like this for years now. Ditto with Amazon.
 - https://leanpub.com/authors
Btw, curious what country?
However, I'm not yet making the epub available. The book still needs a lot of work. When it gets closer to completion, I will probably make standalone files available.
Thank you for making this book.
As a proponent of Continuous Delivery, I found the part on releases a bit old-fashioned and slightly disappointing.
I can elaborate a bit more if you want, but most likely you're already familiar with the more iterative and automated approaches.
Everybody has their pet peeves, I guess.
Software is often delivered with hardware or other services which have to have releases, hence releases are still very much a thing.
software world > web software.
Also, to help understand the new, it helps to know the "old." At least that's what I thought. A number of people are mentioning it, so perhaps I shouldn't have ordered it that way.
Yes, if you don't think the discussion in Ch. 7 is adequate.
...was meant for you. I knew you were in the thread but wrongly assumed you submitted the thread itself.
I'm violating my own principle, so I'll give an example: the book Enterprise Rails opens with a chapter titled "The Tale of Twitter". Here's an excerpt:
> Because Twitter was the largest, most public Rails site around, its stumbles were watched carefully, and the steps Twitter took to alleviate its scalability issues were thoroughly documented online. In one instance, the database was becoming a bottleneck. In response, Twitter added a 16 GB caching layer using Memcache to allow them to scale horizontally. Still, many queries involving complex joins were too slow. In response, the Twitter team started storing denormalized versions of the data for faster access. In another instance, Twitter found its use of DRb, a mechanism for remote method invocation (RMI), had created a fragile single point of failure. It replaced DRb with Starling, a distributed messaging queue that gave it looser coupling of message producers and consumers, and better fault tolerance.
> It is of no small significance that Twitter’s engineers chose to absolve Rails of being at fault for their problems; instead of offloading the blame to an external factor, they chose to take responsibility for their own design decisions. In fact, this was a wise choice. Twitter’s engineers knew that reimplementing the same architecture in a different language would have led to the same result of site outages and site sluggishness. But online rumor mills were abuzz with hints that Twitter was planning to dump Ruby and Rails as a platform. Twitter’s cofounder, Evan Williams, posted a tweet (shown in Figure 1) to assure everyone that Twitter had “no plans to abandon RoR.”
It's not that every chapter should open with a jaunty Malcolm Gladwell-esque tale about the life and loves of professional development. But some of your assertions could be made more compelling with some real-world examples:
> As a student of computer science and programming, you’ve learned a significant portion of what you need to know as a rookie professional. The most difficult parts perhaps, but far from the “whole enchilada.”
There's nothing wrong with that statement. But there's not much to it besides filler that students have been told for their entire college education. You yourself must have a few personal examples of what the first week of work taught you that four years of college didn't. And/or you may remember a few interns who, despite their college pedigree, found themselves completely in over their heads. Just a couple of sentences showing how you came to learn the wisdom you now dispense goes a long way.
Anyway, sorry for the extended critique. I am obviously skipping over the part above to how damn hard it can be to find compelling stories :)
Keep in mind though I'm not a professional writer, far from it, a hack basically. It took untold suffering to get here, where "there's nothing wrong with this statement." Because believe me there were ten wrongs and ten revisions beforehand to get to this point.
That said, I do have a number of stories included in the text, though unfortunately fewer than five. I will keep a todo item to include more, but honestly I'll never be able to produce suspenseful text like the above. Maybe if it makes some money I'll be able to hire a pro co-author.
people will see 'Agile' or 'sprint' and cringe.
Agile has been so oversold, there's a big backlash against it coming....
I'll be very surprised if we actually go back to merging, integrating, testing, and releasing code once every few months/years instead of hours/days.
And many firms do continuous delivery for very critical products and services without using Agile nor anything even remotely like Agile.
1. Customer satisfaction by early and continuous delivery of valuable software
3. Working software is delivered frequently (weeks rather than months)
4. Close, daily cooperation between business people and developers
7. Working software is the principal measure of progress
In fact, the only parts of it orthogonal to continuous delivery are
5. Projects are built around motivated individuals, who should be trusted
6. Face-to-face conversation is the best form of communication (co-location)
9. Continuous attention to technical excellence and good design
10. Simplicity—the art of maximizing the amount of work not done—is essential
11. Best architectures, requirements, and designs emerge from self-organizing teams
12. Regularly, the team reflects on how to become more effective, and adjusts accordingly
And aside from (maybe) embracing remote work, I don't see those things going away anytime soon. I certainly wouldn't want to work somewhere that rejects them.
We could stand to lose the name, but probably not the ideas.
What if the client asks you to give them a product once per year? Does Agile recommend telling them no, turning down their money, and replying, "Sorry, but Agile says I have to deliver the product continuously"?
The two words "continuous delivery" mean to deliver something in such a way that the customer doesn't experience gaps between the release of improvements, upgrades, fixes, additions, or desired changes, so that they are not staccato changes at major discrete instances.
Crucially, the customer, not you, gets to decide what "staccato changes" and "major discrete instances" means to them.
If the customer says to you, "Receiving these changes any faster than once a year does not help me" then "continuous delivery" for you, in that case, does not mean the same thing as the modern buzzword ideology of continuous delivery.
Nonetheless, you could still use an Agile process in that scenario. I wouldn't recommend it though.
If a client wants to invent their own meanings for words which diverge significantly from those generally accepted in the community, I'm going to be extremely concerned about our ability to communicate effectively, and take a hard look at whether the risk/overhead of the minefield lurking in our vocabularies is worth the money.
If a customer thinks shipping once a year is "agile continuous delivery" they are wrong, just like if a car on the freeway thinks "35mph is fast enough" he is wrong. I mean, for him, sure, but not when attempting to interact with others cooperatively.
Though I'd probably humor anyone's belief of anything for enough money. And while I'd certainly prefer to work on a project that's actually doing agile, I don't doubt that under some circumstances it's more appropriate to do a traditional waterfall (and call it that).
Not every problem is decomposable that way, and one of the major failure modes of Agile is when you see people trying to shoehorn problems that can't be decomposed like that down into two week sprints.
> If a customer thinks shipping once a year is "agile continuous delivery" they are wrong, just like if a car on the freeway thinks "35mph is fast enough" he is wrong. I mean, for him, sure, but not when attempting to interact with others cooperatively.
Not all cars are on freeways. This analogy borders on absurd.
There are many businesses where infrequent software updates make tons of sense. For example, if you work with field deployed hardware that is not connected to a network. I worked with hardware like that in some defense situations before.
Submitting updates to the actual devices made no sense whatsoever except on an infrequent basis. The devices could not be connected to the internet for security reasons.
If you delivered software to a firm like that, and took the cocksure attitude that you seem to know better than the customer, you'd rightfully lose their business for reasons of poor software practices.
Man, the dogma of Agile is just so frustrating. It really gets me down.
Then these projects are poor fits for Agile, and the appropriate solution is to use something else.
>There are many businesses where infrequent software updates make tons of sense. For example, if you work with field deployed hardware that is not connected to a network. I worked with hardware like that in some defense situations before.
Great! Then these are appropriate places to not use Agile.
That doesn't mean you can define Agile to be "whatever is most appropriate in this situation." It's a tool in the toolbox, and sometimes it's the wrong one, but when you do reach for something else you owe your fellow craftsmen the courtesy of calling it by the correct name.
This is incorrect. You can use Agile for these problems, some organizations already do, and they do not violate any principle regarding continuous delivery by using Agile in these situations.
To be clear though, I feel Agile (or any fixed, one-size-fits-all prescriptive methodology for that matter) is always a bad choice, for any project.
However, trying to act like the words "continuous delivery" have a fixed, unchangeable meaning that never varies by the context of customer delivery targets is simply and unequivocally incorrect.
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan
All of those strongly point towards continuous delivery being part of agile. Continuous delivery is also a given in XP, one of (if not the) founding agile methodology.
I think you need to dilute agile quite a lot to release once a year, although I daresay you can.
It's amusing to me that the first thing people thought to do was go look up the letter of the Agile law, as if that could possibly have any bearing on this. Such a strong indication of what an empty cult Agile really is.
How would you define agile?
From your other comment, it seems like you're defining it as whatever is appropriate for the project. I don't disagree with that sentiment at all, but it does make the word rather pointless.
It's defined by whatever practices emerge through its usage. Which ends up being a big stew of political dysfunction, time-wasting meetings, and pointless metrics.
I feel it is a classic No True Scotsman fallacy to say that "any real agile implementation" does this or that, but all of these "false" Agile implementations lead to the dysfunction.
Try Ch. 7 - Models and Methodology, to skip to Agile.
Of course, as we all know, providing estimates during the requirements phase is very difficult, especially if they're treated as hard commitments rather than rough ballparks. Chapter 2 mentions that getting requirements wrong is a key factor in causing software projects to fail; I'd say that's usually because the implementation costs of the known requirements were estimated inaccurately or not at all, and because of the implementation costs of requirements that aren't discovered until later. It always comes down to cost; I think it's relatively rare for a software project to fail because a requirement turned out to be impossible to implement. (Unless it involves AI. You always have to watch for people trying to sneak in an AI requirement.)
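One common way to keep an estimate honest as a "rough ballpark" rather than a hard commitment is a three-point (PERT) estimate, which forces you to state a range instead of a single number. A minimal sketch (this is a standard formula, not something from the book):

```python
def pert_estimate(optimistic, likely, pessimistic):
    """Three-point (PERT) estimate of effort.

    Returns the expected value and a rough standard deviation,
    so the estimate is communicated as a range, not a promise.
    """
    expected = (optimistic + 4 * likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

# A task guessed at 2 days best-case, 4 days likely, 12 days worst-case:
expected, std_dev = pert_estimate(2, 4, 12)
```

The asymmetry matters: the pessimistic tail pulls the expected value above the "likely" guess, which is exactly the 3X-underestimate effect people complain about when only the likely number is reported.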
It was a real eye-opener for me; I'd been writing code in many languages since I was 10, but this was my first glimpse at "that other stuff" that takes up the majority of one's day.
All in all, I feel like it prepared me for what I would face in the real world. We had to do stakeholder interviews where the professor or a TA played the role of unforthcoming/neurotic stakeholder, were introduced to various general document types like stakeholder analysis, cost/benefit analysis, requirements overviews, etc. and the last 1/3 was pretty much applying all the interviews and data to an estimation process. We also did a greenfield project, an additional functionality project, and a system replacement project to work through the pitfalls of each.
I also think it was the first time I read The Mythical Man Month and Waltzing with Bears.
The two things, by far, that stick out about recent college grads (or really, new developers in general) are the inability to estimate and gather requirements and a complete lack of knowledge around source control.
GitHub and the many projects that are based around checking out project source have made a lot of junior devs more familiar with at least the idea of source control, but very little prepares them for the flailing and hand-wringing that comes with estimation.
The "Models & Methodologies" chapters looks great. It may become my new "here, read this!" when people ask me "what's agile?" or "how else?"
Or again the Wikipedia page on the history of software engineering, which is frightfully inadequate. https://plus.google.com/+LaurentBossavit/posts/gpSwoWn4CBK
I'll add my voice to those that have already stated such a book shouldn't start by assuming the SDLC as a reference model: it embodies too many of those outdated assumptions. More in that vein in my own book http://leanpub.com/leprechauns
Instead there's Agile and the idea that we can throw together something that roughly works and iterate until our confidence is high enough that we can release it: so-called beta-driven development. (Perhaps a vestigial remnant of the Unix philosophy?)
I'm not arguing that formal methods should be used for every software project. I think Carmack was right to point out that if all software was written like the software at JPL we'd be decades behind where we are now. However I do think that it should be a part of the experience of becoming a programmer so that when we encounter hard problems we have the correct instincts to rely on mathematics to help us.
I found it a shame that I didn't encounter these tools until very recently. It's a well-kept secret in academia that I think should be shared.
Do you believe the larger point (about cost of defects) is incorrect? Your g+ critique seems to take issue with the details of the study, but I missed an assertion it is wrong.
As to the wikipedia timeline, I cherry picked from it. The point being to pick the important things a student should know, not list everything possible. Seems your WP edits didn't make it through?
If someone made quantified claims about "the number of minutes of life lost to smoking one cigarette" I would refuse to take them seriously: I would argue that the health risk from smoking is more complex than that and can't be reduced to such a linear calculation.
This talk about "the cost of a defect" has the same characteristics. I don't mean the above argument by analogy to be convincing in and of itself, and I've written more extensively about my thinking e.g. here: https://plus.google.com/u/1/+LaurentBossavit/posts/8tB2RQoHQ...
But it's a large topic that quite possibly deserves a book of its own.
As for the history of software engineering, it's pretty much the same - to do it properly would entail writing a book, pretty much, and I didn't want to do it on WP unless I could do it properly.
Source: Adapted from "Design and Code Inspections to Reduce Errors in Program Development" (Fagan 1976), Software Defect Removal (Dunn 1984), "Software Process Improvement at Hughes Aircraft" (Humphrey, Snyder, and Willis 1991), "Calculating the Return on Investment from More Effective Requirements Management" (Leffingwell 1997), "Hughes Aircraft's Widespread Deployment of a Continuously Improving Software Process" (Willis et al. 1998), "An Economic Release Decision Model: Insights into Software Project Management" (Grady 1999), "What We Have Learned About Fighting Defects" (Shull et al. 2002), and Balancing Agility and Discipline: A Guide for the Perplexed (Boehm and Turner 2004).
Joel on Software also has a convincing narrative on the subject. Therefore I'm not in a big hurry to replace the image, though I will put it on my todo-list.
For just one example, here's my treatment of Grady: http://lesswrong.com/lw/9sv/diseased_disciplines_the_strange...
It's not just me. Here's another author of a book aimed at software professionals who attempted some fact-checking, and came up short: http://www.sicpers.info/2012/09/an-apology-to-readers-of-tes...
I, too, used to argue for practices such as test-driven development, based on the supposedly firm knowledge of the "cost of defects curve". I changed my mind about the cost of defects when I saw how poor the data was. This is me in 2010: http://lesswrong.com/lw/2rc/coding_rationally_test_driven_de... and this is me two years later, recanting: http://lesswrong.com/lw/2rc/coding_rationally_test_driven_de...
However, I haven't (entirely) changed my mind about TDD and similar practices. I do still believe it pays to strive to write only excellent code that is easy to reason about. I like to think that I now have stronger and better thought out reasons to believe that.
So now I'm a bit confused on how to proceed.
Edit: It's as mixmastamyk says. I created a repo called 'achshar.bitbucket.org' and it works.
SSL, custom domains, static generator auto-builds, etc.
disclaimer: co-founder of Aerobatic
It is a paid offering, though you get two repositories free, and is pretty reasonably priced beyond that.
Oh ok, it needs to be the entire url 'user.bitbucket.org'.
Obviously this wouldn't be used to teach anyone any particular topic in detail but to get them familiar with the general concepts/steps involved in software dev.
The two books I read that I thought covered these ideas well were Code Complete and Code Craft. But it's been about a decade now. Perhaps they're too dated.
Thanks, interesting. I've tried to define difficult terms, and it is aimed at a technical audience, but there is definitely room for improvement. If there are any readers having trouble, I'd appreciate hearing where. Will take a look myself as well.
I'm not trying to knock you here as I'm not exactly the audience for a publication of this type anyway.
It does give a good idea about what you should know/be familiar with and a bunch of link outs to other sources on the web.
(More information is at the bottom of the intro/title page under Acknowledgements).
That's what made that album cover so great.
Dame Jocelyn Bell Burnell on BBC radio's Life Scientific http://www.bbc.co.uk/programmes/b016812j
I read the first several chapters and picked up the impression that the author doesn't put enough effort into pointing out how the processes/practices can (and should) be completely different depending on the circumstances. If a startup tries to use the same processes as Google or Facebook, it'll be dead in the water. If SpaceX engineers write software the same way SnapChat does, we will never see their rockets leaving launch pads.
1. Anything that uses the word 'protip' cannot be taken seriously. I think this needs a law. 'The Law Of Silly Programming Memes - Anything using the word "protip" cannot be taken seriously'
2. The 800px width format in the world of responsive design gives me pause. Basically this says 'I'm for mobile - screw you'. I would hope that a better format for this would be chosen in the future.
Also thought it was recognized that narrow columns are easier to read, such as in a newspaper. It uses the well-regarded "read the docs" theme. Maybe zoom would help?
Note: An academic recently combined Cleanroom with Python for some nice results given how high-level Python is. I thought Haskell would be more ideal.
Note: Describes Fagan process with relevant links.
Note: Altran/Praxis Correct by Construction is a modern high-assurance method with numerous successes. It costs about a 50% premium for nearly defect-free systems. SPARK Ada is GPL these days.
Note: Margaret Hamilton, who helped invent software engineering on Apollo mission, deserves mention for the first tool that automated most of software process. You can spec a whole system... one company specified their whole factory haha... then it semi-automates design then automatically does code, testing, portability, requirements traces, and so on. Guarantees no interface errors, which are 80+% of software faults. Today's tools have better notations & performance but still can't do all that for general-purpose systems: always a niche.
Note: Added Eiffel method to make up for fact that I have little to nothing on OOP given I don't use OOP. Meyer et al get credit for a powerful combo of language features and methodology in Eiffel platform with huge impact on software. Specifically, Design-by-Contract has so many benefits that even SPARK and Ada both added it to their languages. Just knocks out all kinds of problems plus can support automated generation of tests and such.
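Eiffel builds contracts into the language with `require` and `ensure` clauses; as a rough illustration of the Design-by-Contract idea in Python (the `contract` decorator below is my own toy helper, not a library API):

```python
def contract(pre, post):
    """Toy Design-by-Contract decorator: check a precondition on the
    arguments and a postcondition on the result, Eiffel-style."""
    def wrap(fn):
        def checked(*args):
            assert pre(*args), "precondition violated"
            result = fn(*args)
            assert post(result, *args), "postcondition violated"
            return result
        return checked
    return wrap

@contract(pre=lambda xs: len(xs) > 0,
          post=lambda r, xs: r in xs and all(r <= x for x in xs))
def minimum(xs):
    """Smallest element of a non-empty list; the contract states both
    what we require of callers and what we promise in return."""
    return min(xs)
```

The point of the combo Meyer gets credit for is that the contract is part of the interface, so tools can generate tests from it and verifiers (like SPARK's) can try to prove the postcondition statically.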
So, there's you some reading on methods of making robust software that might fit into your book or something else you do. :)
Thanks a lot for sharing this, by the way
Note: After a big recall, the hardware field is the one exception where they have all kinds of formal verification and testing. They're big on that stuff. Not same tools as software, though, for the most part.
Far as those using it, it helps to look at what products are available and who vendors say their customers are. Look at high-assurance plus medium assurance, as many former customers of high-assurance do medium these days for the above reasons. Even most vendors in the niche are saying "F* it..." since demand is so low. So, you get especially high-security defense, a few in private security, some banking, aerospace, trains/railways, medical, critical industrial (esp. factories or SCADA), firms protecting sensitive I.P., and some randoms. The suppliers are usually defense contractors (BAE's XTS-400, Rockwell-Collins AAMP7G); small teams in some big companies (e.g. IBM Caernarvon, Microsoft VerveOS); and small firms differentiating with quality & security (Altran/Praxis, Galois, Sentinel HYDRA, Secure64's SourceT).
Here are some examples. Some have marketing teams in overdrive; just look past that to the use-cases, customers, and technical aspects. ;) Altran comes first as they focus on high quality or effectiveness for premium pay, with some high-assurance. Probably a model company for this sort of thing. AdaCore lists lots of examples which are actual customers. Esterel has a DSL with a certified code generator that has plenty of uptake. INTEGRITY links show common industries & specific solutions that keep popping up on the security side. NonStop is highly assured for availability, with reading materials probably containing customer info. The last one is a railway showing B-method, most common in that domain, doing its job. Hope this list is helpful. I can't do much better in a hurry since the field is so scattered and with little self-reporting.