Here's mine: to a first approximation, code quality is unimportant. Rapid iteration is what matters.
I've seen lots of failed projects with smart, super-conscientious developers, who wouldn't write a method without a test. And others with programmers that frequently copied code, used globals for side effects, basically did everything they tell you not to do, but they produced programs that people loved using.
The difference was rapid iteration with real customers. This flushed out important bugs, even with code that was written in a way that was prone to be buggy. But more importantly, the developers quickly understood the product in a more global sense -- what was relevant to users and what wasn't. I suspect rapid iteration can turn even mediocre developers into stars.
It's important to recognize that good practices are about tradeoffs. The truly good developer is not the one that never copies code; it's the one that knows when it is the more fruitful course of action. So in a way, you need to be brilliant to get away with being sloppy.
I think some developers put too much stock in unit testing, and it gives them (and their managers, customers, and co-workers) a false sense of security. They end up wasting time and not really preventing as many bugs as they think. Application logic changes, and they have to write twice as much code: the change to the application plus all of the tests that depended on that logic.
Writing a regression test for a complicated routine or library is one thing, but writing a test for every method of a class just for the warm fuzzy of a test framework reporting 100% coverage is silly.
Unit tests may be overrated, but writing them certainly isn't.
I've seen so many programmers start writing much better code after adding the criterion "it has to be testable".
BTW, the killer bugs are usually not in the "complicated routine". They are more often in the simple things programmers like to think they can do so perfectly that they don't need to write tests for it. I have found many a bug in the most simple one line functions I never bothered to test.
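A made-up example of the kind of bug that hides in a "trivially correct" one-liner, and that a single trivial test would catch (the `clamp` name and the swapped argument are invented for illustration):

```javascript
// Buggy one-liner: clamp n to [lo, hi], but `hi` was typed where `lo` belongs,
// so any n below lo silently comes back as hi.
function clampBuggy(n, lo, hi) {
  return Math.min(hi, Math.max(n, hi)); // bug: should be Math.max(n, lo)
}

// Fixed version.
function clamp(n, lo, hi) {
  return Math.min(hi, Math.max(n, lo));
}

console.assert(clamp(5, 0, 10) === 5);
console.assert(clamp(-3, 0, 10) === 0);      // the one test that exposes the bug
console.assert(clampBuggy(-3, 0, 10) === 10); // buggy version returns hi instead
```

The buggy version passes every test that only exercises in-range inputs, which is exactly why simple functions nobody bothers to test are where these bugs survive.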
I said this to my coworker yesterday: the funny thing about code is that the more testable it is, the less it actually needs to be tested. (I specifically meant unit tested, because well factored code is easier to reason about, etc.)
I totally agree; anyone who tests everything has missed the point. I write tests for the critical parts of a piece of software that have a high risk of failing somehow (a DAO or billing implementation, for example).
It's not the all-or-nothing proposition your opening sentence makes it out to be.
Just wait until you need to actually write an automated test and have to deal with code (and developers) who say "well, it's hard to do that now, since the code is tightly coupled".
Sometimes it feels like unit tests are simply there to replicate the mechanisms of a strongly typed language. Makes you wonder why they chose a dynamic language in the first place...
The top items don't seem to be very controversial, but instead seem to be "things that good programmers think are true, and bad ones don't."
Here's mine: if you haven't gone through a respectable 4 year CS program, odds are I'm not going to want to hire you since your brain isn't wired to think about algorithms and data structures instead of code and servers.
Surely there's a more direct way to determine how someone's brain is wired? Have a conversation with them about some algorithm or data structure. A four-year CS program is definitely not the only way to learn that sort of thing.
You cannot do object-oriented programming in Java.
What the first calls OOP, the second calls "class-oriented programming". What the second calls proper OOP, I think is more along the lines of what the first advocates.
It seems like what he is arguing for (a methodology that starts with algorithms and generalizes an interface) is what type classes (at least in Haskell) offer.
You could say the same for the comments right here.
It's not often you see comment sections with that dynamic. One is prompted to respond with something that will essentially relegate them to the bottom of the page.
You may choose to support a two-point comment using an upvote, perhaps only to see it voted back down a minute later. Controversy in plain sight?
Just a few years ago Javascript had a horrible reputation. It's funny how the more people learn Javascript, the more it is considered a great language. I guess it's some form of Stockholm syndrome.
I think it's a bit of both, but the claim that Javascript is an amazing language is very hard to swallow given all its design flaws, acknowledged by Javascript's creator himself.
Well, for want of a Lisp... In a way, Javascript is a "Lisp in C clothes". Purely as a language, though; in practice, this idea has not been followed through.
Debuggers are a limited tool, and developers should seriously consider not using them. In particular debuggers make it easy to miss the forest for the trees, you solve trivial bugs faster but don't notice the underlying design flaws.
Some people made me realise lately that this is actually controversial: if your function is over half a screen long and it's not GUI initialisation, you're doing it wrong.
Also: one-line comments in front of a block of code should be converted into calls to functions that perform the described functionality. (The comment is made redundant.)
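A minimal sketch of that refactoring (the order-processing names and the 25% tax rate are made up):

```javascript
// Before: a one-line comment labels what the next block of code does.
function processOrderBefore(order) {
  // compute total with tax
  let total = 0;
  for (const item of order.items) total += item.price;
  total *= 1.25;
  return total;
}

// After: the comment becomes the function name, so it can't drift out of
// date the way a comment can.
function totalWithTax(items, taxRate) {
  let subtotal = 0;
  for (const item of items) subtotal += item.price;
  return subtotal * (1 + taxRate);
}

function processOrder(order) {
  return totalWithTax(order.items, 0.25);
}
```

Both versions compute the same total; the second one also gives the extracted logic a name you can call, read, and test on its own.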
IMHO, the real debate lies in "just in time learning" vs "just in case learning".[1]
Should one learn the fundamentals first (assembly/C) or the practical first (Python)? The answer differs from person to person. I personally took the practical-first approach.
It's 2010 and we're still running a lot of native code with no OS-enforced sanity checks on pointer arithmetic or types or capability tokens. "Crash", "boot", and "virus" are phenomena the general public has become familiar with. Someday we're going to look back on all this as grotesquely stupid, even negligent.
(But I can't see a path from here to there. Especially one that involves getting paid to write an OS worth using.)
Or if you're worried about the licensing of Singularity, Singh and others, check out MOSA (http://www.mosa-project.org/), which tried to bring SharpOS and EnsembleOS back together (both inactive now unless something changed lately), or jNode (or many others I've probably missed). They should all be in a runnable state (even if very limited).
There's been a lot of this kind of work happening. There are many paths, which unfortunately cannot include many of the existing native (unsafe) solutions.
Open source projects are devaluing software developer wages in the industry.
Many companies aren't going to need educated software engineers anymore, because more and more software apps and code are given out for free. They will still need developers to make custom changes, but for less pay and with less education (think mechanic instead of engineer). It will make it that much easier to outsource your job to India.
Editing one big file is easier than editing many small files. Hence, have a tool to break your "one big file" into small "section" files if needed but stick with one big file as long as your editor can stand the size.
This is so "controversial" that some nice fellow here on HN both called it "crazy" and up voted it because of that.
My current "one big file" is 16555 lines of code long (javascript/nodejs). GVim handles it perfectly.
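The "tool to break your one big file into small section files" could be as simple as the sketch below. The `// ==== section: name ====` marker convention and the function name are made up for illustration:

```javascript
// Split one big source file's text into named sections, using an invented
// "// ==== section: name ====" marker line as the delimiter.
function splitSections(source) {
  const sections = { preamble: [] };
  let current = 'preamble';
  for (const line of source.split('\n')) {
    const m = line.match(/^\/\/ ==== section: (\S+) ====$/);
    if (m) {
      current = m[1];          // start collecting lines for the new section
      sections[current] = [];
    } else {
      sections[current].push(line);
    }
  }
  return sections;
}
```

Writing each entry of the returned object out to its own file (and concatenating them back) is a few more lines with `fs`, so the "one big file" stays the source of truth while the sections exist only when needed.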
To expand on this, I'm not saying that I think a lot of programmers are idiots. (I realize now it could be taken that way.)
What I mean is that as an industry, we're in the stone ages. What we're doing will be looked back upon the same way we look back at the bloodletting practiced by doctors a few hundred years ago.
Just when I think I'm beginning to get a grip on things, a whole new layer opens up, and I start over as a beginner again. I've heard anecdotal evidence that programmers who have been around 30 or 40 years still experience this.
I wrote a blog post a while back that was really controversial. It was about code ownership. People were split down the middle between hating code ownership and loving it. I think some people even took it to mean actual ownership of the code.
I was coming from kind of a dictator view, where I accept patches rather than blindly letting people commit changes to projects I'm responsible for.
It is completely unprofessional to implement a solution you've been asked for, if that solution is bad practice.
That is to say, if you're asked to make a hack that will cause pain and misery, has a security risk attached, won't be performant, or carries other clear, obvious risks, it is unprofessional not to object, and in some cases, to refuse.
How is this controversial? Anybody who's had to sanity-check function arguments (e.g. null pointer checking) knows that having multiple exits from a function is sometimes the Right Thing.
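The contrast under discussion, sketched with made-up names (a guard-clause version with multiple exits versus a single-entry-single-exit version of the same null checking):

```javascript
// Single-entry-single-exit style: each validity check adds a level of nesting.
function describeUserSese(user) {
  let result;
  if (user == null) {
    result = 'no user';
  } else {
    if (user.name == null) {
      result = 'anonymous';
    } else {
      result = user.name;
    }
  }
  return result;
}

// Guard-clause style: invalid cases exit immediately and the happy path stays flat.
function describeUser(user) {
  if (user == null) return 'no user';
  if (user.name == null) return 'anonymous';
  return user.name;
}
```

Both behave identically; the early-return version just keeps the sanity checks from burying the actual logic in nesting.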
I was told in school that SESE (single entry, single exit) was the proper way to do things. I've met other people who agreed with that. You and I are of a differing opinion. Seems at least somewhat controversial to me.
There's too much fragmentation among programming languages. For instance, I think Ruby, Python, Perl and PHP all solve the same set of problems. One must win for the greater good. The benefits of unifying the developer community outweigh the benefits of diversity. There I said it :)
They're just the BASIC equivalent of jumps in assembler, and GOSUBs are just calls. If they're really so bad, why are their assembler equivalents used in almost every program on every architecture?