It depends on the situation, of course. Sometimes you have to "move fast and (risk that you might) break things". I think the "compleat" programmer should be able to work anywhere on the spectrum from fast to careful, depending on the task. But I certainly spend most of my time on the careful end, and I generally prefer to work with others who do too.
I used to work in financial software, where things were very different, and before that I'd interned at a company that did avionics software, which is a whole other ballgame. I just find it interesting that the definition of "bug" can vary so greatly across domains in what's commonly thought of as a single industry.
1: Even though the specification doesn't explicitly state "shouldn't crash if the user inputs a negative number", it's understood that the program shouldn't crash given any user input.
I agree -- different kinds of programming impose very different demands.
Waterfall is a theoretical idea that sounds nice but fails miserably in practice. The key problem is the assumption that if you are diligent enough at collecting the requirements and constraints, you can resolve them all at the design and implementation stage. The reality is that there is never enough time/information/budget/knowledge/resources to fully map the requirements and constraints; furthermore, those change all the time, so even in the unlikely case that you have managed to map the territory, by the time you deliver the solution the territory has changed.
One can argue theoretically for waterfall as much as they want (and in fact some people do), but the collective experience with waterfall is so abysmal that anyone who wants to use it needs to argue against the failures, rather than for the method.
For that matter, I can hardly think of large-scale software projects that succeeded in the sense that they were on time, on budget, and satisfied the requirements as judged a posteriori, regardless of methodology.
Regardless, how many projects that people here are involved in are really large scale?
(Note: This is not a "no true Scotsman" argument. And I honestly think agile is only a good match for some projects, and not e.g. infrastructure. Nevertheless, this does not sound like the "agile" I know).
Agile, done right, -can- (and I'd argue, -should-) be slow, in the context of what this is talking about. 'Slow' doesn't mean delaying working on coding until all requirements are defined (i.e., waterfall), it means once you start working, to make sure you've thought about it, to make sure you're careful in your implementation, that you pause and think about what you're doing, don't just breeze over edge cases or possible error conditions.
You can think slow or act fast even given one requirement. Say I have a requirement, "I need a REST endpoint that gives me X information".
The fast approach is to just say "Okay, this is getting information; that's a GET. We've already been writing this in this language and framework, let me just stick a new endpoint in the router file that gets that info and returns it".
The slow approach is to say all of that, and then follow it with "Okay. What happens if this information isn't there? Can that happen? If something goes wrong, do I need to return a sensible error message along with the HTTP error code? Does the consumer understand a specific kind of error message, such that it will be displayed to the user? As part of getting X, are there any additional parameters that we need; i.e., is it enough to get -all- Xs for the call, or is the size of it likely to be enough to warrant getting just Xs that match (filtering criteria)? Does the DB need to be optimized for that sort of query?" Note that agile -does- mean you probably don't address all of those as part of this story; KISS, YAGNI, etc, still apply. But taking it slow means you -think- about those things.
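To make the contrast concrete, here is a minimal sketch (plain Python, no framework; the data, field names, and handler are all hypothetical) of what the "slow" version of that GET-X endpoint might look like once you've asked those questions: missing data and bad filter parameters fail in a well-defined, recognizable way instead of being breezed over.

```python
# Hypothetical stand-in for the data store behind the endpoint.
DB = {"xs": [{"id": 1, "kind": "a"}, {"id": 2, "kind": "b"}]}

def get_xs(params):
    """Handle GET /xs?kind=... and return an (http_status, body) pair."""
    kind = params.get("kind")
    if kind is not None and kind not in {"a", "b"}:
        # The consumer gets a sensible, machine-readable error message,
        # not just a bare HTTP error code.
        return 400, {"error": f"unknown kind: {kind!r}"}
    xs = DB.get("xs")
    if xs is None:
        # "What happens if this information isn't there? Can that happen?"
        return 404, {"error": "no X data available"}
    if kind is not None:
        # Filtering criteria, so callers needn't fetch -all- Xs.
        xs = [x for x in xs if x["kind"] == kind]
    return 200, {"xs": xs}
```

The fast version is this function minus every branch except the last line; the slow version costs a few extra minutes of thought and makes each failure mode an explicit decision.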
What's the difference, if it doesn't lead to any change? Well, thinking about it means you likely cause any failures to fail gracefully, in a well defined, recognizable, predictable manner. It also likely means you've brought up those issues, and you can determine whether they should be tackled at all; you might be able to do them as part of this story if they're small, or add some hooks such that it'll be easy to add them later, but it might also lead to creating new stories, that get prioritized into the next sprint.
Now, while people argue you need to move fast and break things sometimes, I'd argue that taking things 'slow' like that -is always better-. Why? Because if you take things slow, you can still choose not to do something. The overhead of just thinking about it, and then actively choosing to take the fastest path, is very, very small compared to just rushing in to do the fastest path. Maybe 10% extra or so.

But as often as not, taking it slow will lead to you coming up with additional work that you really need to do; without it your app will fall over, or your users will get pissed off, or whatever. While that then feels slower (you are, after all, doing more), it's getting things done that -need to be done-.

If you'd taken the fast approach, you'd realize you need them later, but in hindsight would justify moving fast because "well, we didn't know that then, and we got features out the door". Sure, you didn't know that then because you -didn't pause to think about it-. You can't claim that you couldn't have predicted it, and moved to avoid it, because you never tried to predict it. If you -did- spend some time thinking, and fail to catch something, then you really couldn't have predicted it, because you -tried- to. You've optimized for the best possible path as you can understand it at the time; taking the fastest path may or may not land you there, and you can't know whether it was optimal.
In short, moving fast means defaulting to not doing the extra work; moving slow means thinking, and actively -deciding- what level of work needs to be done. You can be nearly as fast as 'moving fast' (by thinking about it and deciding you don't want to do the extra stuff), but you can also decide that certain tasks really need to be done, and that, while making you slower, leads to a better product.
And it's "no bug was found that anyone knew of". Not a bad standard, but latent defects frequently have a long shelf life.
They do, but it has also been observed that code in which no bugs have been found is less likely to harbor them than code in which some bugs have already been found and fixed (other things being equal, of course).
If I had had to bet where in the whole codebase the next bug would turn up, I certainly would not have bet it would be in this guy's code.
"This article reports on an effort to explore the differences between two approaches to intuition and expertise that are often viewed as conflicting: heuristics and biases (HB) and naturalistic decision making (NDM). Starting from the obvious fact that professional intuition is sometimes marvelous and sometimes flawed, the authors attempt to map the boundary conditions that separate true intuitive skill from overconfident and biased impressions. They conclude that evaluating the likely quality of an intuitive judgment requires an assessment of the predictability of the environment in which the judgment is made and of the individual's opportunity to learn the regularities of that environment. Subjective experience is not a reliable indicator of judgment accuracy."
They agree on almost every point; they disagree on the usefulness of checklists in low-validity environments and on "whether there's more to be gained by listening to intuitions or by stifling them until you have a chance to get all the information."
Kahneman & Klein, 2009, "Conditions for intuitive expertise: a failure to disagree." http://www.fiddlemath.net/stuff/conditions-for-intuitive-exp...
"Strategic decisions: When can you trust your gut?" http://www.mckinsey.com/insights/strategy/strategic_decision...
I've had success with forcing them to use pencil and paper, or a whiteboard, to design their algorithm and write the code, before ever touching a computer to test it. This is a way to force them into the "slow thinking" mode, because there is no immediate feedback and they'll have to think more carefully about it. Put them in front of an IDE, however, and they just get tempted into writing code without thinking much about what it's doing.
In an ideal world, the jiggling around would be followed by a review of the relevant codebase and a good refactoring, but this step is pretty universally missed in my experience. Additionally, the thinking should be occurring in the "build the tests" phase, but it's not. Or rather, only the "happy path" tests are built; the corner cases and error cases are missed or outright skipped.
I noticed this in myself first, and slowing down has definitely helped. As has spelling out the test cases before touching any code.
I think everyone hits this; to avoid it you have to have a perfect understanding of the domain a priori. The difference I see now, compared to when I was starting out, is I recognize I'm doing it more quickly. Once I do, I will re-examine the domain from the perspective of "it includes all these edge cases", and generate a new abstraction that encompasses more (hopefully all) of them.
Uh, no. I'm pretty sure everyone learned the apparent lesson that "slow-and-steady wins the race". Of course, that lesson is seriously incorrect since almost every race is won by the fastest. The real lesson is that pride goes before a fall. That is not what the article is suggesting.
Daniel Kahneman, the only non-economist to be awarded the Nobel Prize in economics.
Uh, no. John Nash, the only economics "Nobel prize" winner most people could actually name, is also not an economist.
And then the article goes on to suggest that brain teasers and their ability to tease brains--even MIT brains--have real-world implications. Okay, whatever.
1) A bat and a ball cost $1.10. The bat costs $1.00 more than the ball. How much does the ball cost? ____cents
2) If it takes five machines five minutes to make five widgets, how long would it take 100 machines to make 100 widgets? ____minutes
3) In a lake, there is a patch of lily pads. Every day, the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long would it take for the patch to cover half of the lake? ____days
How did you do? (see article for answers)
I find I fall over far more on things that hit that trigger needlessly (problems that look like they need careful thought, but where the intuitive result is the correct one) than the other way around.
All things considered, I'd rather have the problem that way around, and over-think and lose time needlessly, than get the wrong answer more often. However, I'm biased toward wanting that to be the best way, so I'm not a reliable judge of to what degree it's a "better" way to think.
The interesting takeaway for me is that environment probably matters. If I take a test like this in a very relaxed environment I'm probably slowing down automatically.
It's the same way you build expertise in any domain: proceed with your initial hypothesis, but then once experience proves you wrong, build a new, more accurate mental model. That new model will give right answers in a greater percentage of cases, but inevitably it's wrong too, and so you repeat the process.
I quickly came up with the incorrect answers for #1 and #2, but was able to realize that something was wrong with them and thought about it for a few more seconds and got the right answers.
I had the right answer with "fast" thinking for #3 though.
Note: When I say quickly & fast, I mean in less than a second, just the impulse answer. That is what fast thinking is, I believe, but correct me if I am wrong.
I think the important skill is to be on the lookout, in your fast thinking, for problems which are slightly off. This is a clue that the pattern doesn't fit, and you should revert to slow thinking.
Without a clear question, my brain refused to give an answer, so I got it wrong. It annoys me that it is presented without enough information and then you're told, "Your intuition is not as good as you think it is."
1 and 3 were fun though.
2. Got this right in a few moments by thinking about it (no math).
3. Instantly knew the answer thanks to having watched "the most important video you'll ever see": https://www.youtube.com/watch?v=F-QA2rkpBSY
Speed as a habit seems like it would ruin the ability to slow-think. Is there a dichotomy here or can we do both?
"slow" thinking requires checking your immediate gut reaction and taking a moment to think through the problem.
"fast" decision making requires taking a moment to think about how much time it is really worth to make a decision.
Arguably making "fast" decisions requires thinking "slow" about the length of time you should spend making the decision.
If you have the impulse to go fast, think about prototyping whatever it is you are trying to do, if you can. You should agree with yourself to dispose of the prototype completely. By the time you've gone through debugging the prototype you'll know much of what is needed to do the real thing.
Obviously, this doesn't always work. In those cases, whiteboards, pen and paper, even a text document can help.
Shouts out to A.E. for riding in that 6-0
Move fast, stick slow, think fast, talk slow
Speed is important. Patience is important. Stepping back and smelling the flowers is important. It's not about fast or slow. It's about right, and you can't be right all the time if you're stuck in the habit of zooming along all the time.
I remember how many _years_ it took Apple to figure out how to do right-click properly while their competitors were speeding along with that silly second button.
I'm sure it's great once you learn it and get used to it, but not exactly intuitive.
John Nash (mathematician) also won the econ Nobel
These three are off the top of my head, I'm sure there are others as well. Also, it's not a real Nobel - it's a later addition (although it is awarded by the same committee at the same ceremony).
How can we make slow thinking less forgetful?
I've noticed that when I take the time to think slowly through problems, although the initial activation energy is higher, I end up saving time in the long run. I also tend to end up with a higher percentage of correct answers. This trend tends to increase with the complexity of the problem.
Yes, you probably lose some time on the easy problems, but I look at it as a method of gaining net time over days, weeks, etc.
This, unfortunately, is tough to stick with. It's tempting to skip the easy steps along the way (and let our System 1 do all the work, leaving System 2 to collect dust).
 - This is still a working hypothesis that I will undoubtedly go back and forth on for the years to come.
The intuitive answer to the first problem is $0.10, but if you take time to double-check, adding a dollar to get the bat's price and summing the two, you see the total comes to $1.20, not $1.10, so it's wrong.
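That double-check can be written out in a few lines; with ball = x, the bat is x + 1.00, and solving x + (x + 1.00) = 1.10 gives the correct answer:

```python
# The intuitive answer fails the check: the total comes to $1.20, not $1.10.
intuitive_ball = 0.10
assert round(intuitive_ball + (intuitive_ball + 1.00), 2) == 1.20

# Solving x + (x + 1.00) = 1.10  =>  2x = 0.10  =>  x = 0.05
ball = (1.10 - 1.00) / 2
assert round(ball + (ball + 1.00), 2) == 1.10
```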
I don't think this is too different from programming... the time you take to re-read some code you just wrote and think like a compiler and mentally execute the edge cases, etc.
It's ok to do "fast thinking" if you verify, and go back and think through it properly if you fail (and hopefully over time learn to recognise what type of problems you know well enough for them to be worth making a "fast" attempt at first, vs the ones you should just think through slowly from the outset).
This was the most significant bit for me: getting their thinking and behavioral patterns to change, and creating a social enforcement around it. Arguably, this is what school should be about, because so many kids do not get the right kind of nurturing they need at home, and lead destructive, broken lives as a result, only perpetuating things further in their own children, and so on.
His book is very thorough in defining these systems, how they interact, and what situations each is a good fit for.
Fast thinking is important for low cost, instantaneous decisions. It's important so that your type 2 system isn't engaged all day, because that system consumes more energy and takes more effort to kick in.
The problem is that erroneous heuristics are frequently encoded in type 1. That doesn't mean it's useless -- there is an intuitive expertise that can be developed by correctly providing feedback to your type 1.
Try not to think of it as "good" versus "bad" thinking. Type 1 is incredibly powerful, but has all these gotchas.
The way I think of it is type 1 is like an associative or probabilistic cache. Lookups aren't always perfect -- sometimes type 1 answers an easier question rather than the one it asked -- but the idea is that you wouldn't be able to function if every single decision or interpretation had to be run through type 2.
In other words, type 1 is a shortcut to prevent your type 2 from being overloaded.
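The cache analogy can be sketched in a few lines; this is just an illustration (a real memoization cache is deterministic, unlike type 1, which sometimes answers a different question than the one asked), with a made-up `judge` function standing in for an expensive deliberate judgment:

```python
from functools import lru_cache

calls = []  # record each time the "slow" type 2 system actually runs

@lru_cache(maxsize=128)          # "type 1": cheap associative lookup
def judge(situation):
    calls.append(situation)      # "type 2": expensive deliberate reasoning
    return len(situation) % 2 == 0   # arbitrary stand-in for a hard judgment

judge("red light ahead")
judge("red light ahead")  # second time: answered from the cache, type 2 stays idle
```

After the two calls, `calls` holds only one entry: the deliberate path ran once, and the repeat was served by the shortcut, which is the point of the analogy.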
The race toward performance doesn't really exist in nature; it's mostly made up, and only works well for sports entertainment. There really exists a culture of performance, and it is silly.
In programming, I think of myself as a slow thinker, but quite fast programmer. I very often find myself writing some code, re-reading it and throwing it away, because I realized it doesn't really work the way it should work. I do the same things with emails. Whenever I write a long email, the first version is most likely going to be discarded, but it still helps me organize my thoughts.
When solving a problem, always make sure you understand the root cause of it. Don't be satisfied with fixing the symptoms. If things don't work as you expect, make sure you completely understand why it is so.
Eventually, it becomes a habit to think twice before committing to something. You might still experiment, draw things on a whiteboard, do whatever helps you think, but you should not consider the side-products of that process to be the result.