This is an illuminating example of how different assumptions lead to different definitions of what makes good code. I'm convinced that this is one reason why there is this belief that 80% of programmers are incompetent. It's not that there aren't truly incompetent devs out there, but it's also easy to assume someone is incompetent just because they were working from different constraints and experiences than you, and the way they did things just isn't immediately apparent.
I have often been complimented on how well I code, but I don't think I stand up to many people's moderate questioning all that well. I think this stems from looking for a clean solution to most problems vs. the perfect or most obvious solution. The vast majority of the time I avoid dealing with the minutiae that many people feel are necessary.
For example, I once had a school project where we were supposed to implement some fairly simple math function in x86 assembler. My first question in class was: what instruction set can we use? (x86, Pentium I.) So we can use the floating point unit? (Yep.) So while everyone else was implementing floating point functions by hand, I simply used the built-in floating point unit. I still remember handing in 4 pages of clean code and asking why this was considered a major project, until people started asking for more time to fix their buggy 40+ page programs, even though I had told several people what I thought was an easier solution. I even told people that I had finished the thing in a few hours while watching TV, but they forged ahead anyway, I guess through the sunk cost fallacy or something. I mean, sure, the way you do arbitrary exponentiation on the FPU is a little funky, but that's better than doing it by hand.
Over and over I see the same things. Last week someone was demoing his custom caching solution, built by hand, and I was like: (a) this does not fit our architecture, and (b) why did you spend the time on this? Sure it's 'fun', but why bother?
At the same time, in interviews I have been asked minutiae about the Java object system, etc., and all I can think is: if this matters, you're doing something horribly wrong.
Hmm, I interviewed recently at a pretty well-known startup and crashed and burned because I didn't know uncommon HTTP codes and request/response headers off the top of my head. I aced most of the rest of the questions, and I think I got along with everyone pretty nicely.
I understand that the interviewer probably works with them every day and has them memorized, but really that's something I could have looked up in 10 seconds. I don't think people should be penalized for following Einstein's "never memorize what you can easily look up in a book" philosophy.
In sport psychology, Hull's Drive Theory predicts that introverts perform poorly under pressure. The physiological response from a job interview is about as close to the pressure of taking the last shot in a basketball game as I've ever experienced, and I've experienced both. Couple this with the fact that most programmers are introverts, and I'm always amazed at the credulity with which people accept these FizzBuzz claims. Is it more likely that 90 percent of experienced programmers can't program, or that something about the evaluation process is broken? I can say that I would never take a programming job where it was critical to do well under performance-anxiety-inducing conditions.
The evidence overwhelmingly suggests that interviews and interviewers are unsuitable for the task they are used for (i.e. selecting the best candidate). Assuming this applies to programming interviews as well, then your opinions of these programmers (both good and bad) could be just as incorrect as the Israeli army training example given by the psychologist in this article:
"Because our impressions of how well each soldier had performed were generally coherent and clear, our formal predictions were just as definite. We rarely experienced doubts or formed conflicting impressions. We were quite willing to declare: "This one will never make it," "That fellow is mediocre, but he should do OK," or "He will be a star." We felt no need to question our forecasts, moderate them, or equivocate. If challenged, however, we were prepared to admit: "But of course anything could happen."
We were willing to make that admission because, despite our definite impressions about individual candidates, we knew with certainty that our forecasts were largely useless. The evidence was overwhelming. Every few months we had a feedback session in which we learned how the cadets were doing at the officer training school and could compare our assessments against the opinions of commanders who had been monitoring them for some time. The story was always the same: our ability to predict performance at the school was negligible. Our forecasts were not much better than blind guesses."
Another recent article by the author was discussed here:
Interviewing is an inherently flawed way of determining skill. This manifests itself in numerous ways, one of which is our inability to properly extrapolate information from an interview: if a good programmer needs to search online for information to solve a problem about once each 8-hour day, then during a one-hour interview, the good programmer may ask for help once. But if they do that, the interviewer sees someone who needs to ask for help once every hour. Alternatively, if they don't ask for help, the interviewer might be led to believe that they don't use the tools afforded to them. Because one cannot ask for help 1/8th of a time, the interviewer comes away with a negative impression of the candidate regardless of the choice the interviewee makes. This sort of lose-lose for the candidate, caused by extrapolation, can happen along many different axes: anything that is necessary, but only in moderation.
Interviews are a very flawed system for identifying skill.
Maybe I'm actually a terrible coder and don't know it, but I search through documentation way more than once a day. Saves a ton of time.
I always thought a great interview would be something like "Here's a dev machine, make me an app that does this. I'll be back in an hour." (coding test), followed by a code review and "Let's go to happy hour." (personality test).
An interview where the interviewer favors people who look at documentation less often is already deeply flawed. In my experience, the best programmers spend the most time reading documentation, and the worst programmers are the ones who never read docs and never ask questions.
I would never want to hire or work with someone who read docs only once a day.
Do I understand you correctly that you're now saying that multiple times per day is acceptable, but d/i times per day, where d is the length of the day and i is the length of the interview, is unacceptable?
This is still nuts. There is literally no amount of documentation checking that is too much, as long as the guy produces results.
It's a good point, and one I expect most people have heard quite a few times. However, interviewees probably either don't have a job or aren't doing well with the one they have. As such, I suspect you're likely to get a larger % of the mediocre than you might see elsewhere in the industry.
I don't know how to estimate this skew, but it's probably substantial, particularly in this industry.
I'm fairly introverted and a competent developer; I've only done two interviews in my 13-year-so-far professional career writing code.
The first was for my first job after college. I only called one company in the city where I planned to move; they seemed good, so I got a job there.
The only reason for the second interview -- last year -- was because I moved to Europe, and decided after 4 years that working on the projects & jobs I could get from my contacts in the US wasn't great, so I'd have to (ugh) meet some new people.
I suspect there are plenty of competent developers who have experiences like mine. We'd rather go work with people we already know, and the folks on the hiring side would rather grab someone proven, versus (on both sides) interviewing strangers and hoping for the best.
> it's also easy to assume someone is incompetent just because they were working from different constraints and experiences than you, and the way they did things just isn't immediately apparent.
I don't know if it's common, but I tend to judge competence by the bottom line: does it work? is it supportable? what was the cost of getting there?
If it doesn't work, or does work but cannot be supported/modified, and cost a lot, it is a product of incompetence. Possibly the manager's, and not the tech guys'.
I've never heard anyone refer to djb as incompetent. Some people dislike his coding style, but I haven't heard any criticism about it that is not about style. And yet, he's working with different constraints and experiences than everyone else. And things are often not immediately apparent.
I thought people generally assumed that 80% of programmers are incompetent because 80% of programmers can't perform a simple task like FizzBuzz without an enormous amount of time and assistance.
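For reference, the entire task is on the order of this (a minimal sketch in Python):

    # FizzBuzz: print 1..100, substituting Fizz/Buzz/FizzBuzz for multiples of 3/5/15.
    for i in range(1, 101):
        if i % 15 == 0:
            print("FizzBuzz")
        elif i % 3 == 0:
            print("Fizz")
        elif i % 5 == 0:
            print("Buzz")
        else:
            print(i)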
I hate this coding style. It combines expert-level knowledge of the system with spooky action at a distance, which means that in anyone else's hands it will break and delay development. Horizontal changes that aren't readily indicated in the code paths that are affected are confusing. (This is one of the many reasons monkey patching can be considered an anti-pattern.)
I guess the longer I've worked with other developers, the more I prefer readability first and testability second.
I love this coding style, but I think you have a valid point. Python decorators are a nice compromise in that they allow you to have cleanly factored 'around' advice (which is strictly more powerful than 'before' and 'after'), with the implementation defined elsewhere but still requiring a visible marker at the location where the advice is used.
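A rough sketch of what I mean, in Python (the 'logged' decorator here is made up purely for illustration):

    import functools

    def logged(func):
        """'Around' advice: runs code before the call, invokes the original, then runs code after."""
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            print(f"-> {func.__name__}{args}")
            result = func(*args, **kwargs)
            print(f"<- {func.__name__} returned {result!r}")
            return result
        return wrapper

    @logged            # the visible marker at the site where the advice applies
    def add(a, b):
        return a + b

    add(2, 3)

The advice's implementation lives in 'logged', which could sit in another module entirely, but the @logged line still tells the reader at a glance that something wraps add.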
What would be ideal is a system that allowed you to define advice by monkey-patching but indicated what advice was applied to a method at the site of its definition. As mentioned in this comment http://news.ycombinator.com/item?id=3246215, I think we are bumping into a limitation of what can be easily managed in "unstructured" (I would say "dead") text files.
I think we are bumping into a limitation of what can be easily managed in "unstructured" (I would say "dead") text files
Yes, I think so too. There are many kinds of relations between entities that we cannot specify, just because "dead" text files make them hard to express. It also means that language "wars" focus on shallow concerns such as syntax, instead of semantics.
It would be great to be able to put constraints and relations at the abstract syntax tree level, or abstract semantic graph level (cross references and such that are automatically updated if entities are moved/renamed).
IDEs sort of work around this by parsing the code and trying to bolt on features, by handling the "dead" text files intelligently. But all this work is lost as soon as you close the editor, so it does not allow the programmer to retain changes at this level.
But I'd love to work on a project that examines different, new ways to represent source code, which could aid static/dynamic code validation, documentation, code comprehension, refactoring, and cross-cutting concerns, and would allow for rendering the source code in any style and syntax that the developer wants.
Of course, this also would present challenges in the area of scm systems, because those are really focused on 'dead' text files. One idea I've had is to represent code as a graph, for example, in a graph database.
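As a very rough illustration of the idea, here's a sketch in Python using networkx as a stand-in for a real graph database (the node/edge schema is entirely invented):

    import networkx as nx

    # Nodes are program entities; edges carry relations that a plain text file can't.
    code = nx.MultiDiGraph()
    code.add_node("WallPost", kind="class")
    code.add_node("Comment", kind="class")
    code.add_node("comments_ext", kind="module")

    code.add_edge("WallPost", "Comment", relation="has_many")
    code.add_edge("comments_ext", "WallPost", relation="advises")  # the monkey-patch, stored as data

    # "What touches WallPost?" becomes a query instead of a grep.
    for src, dst, data in code.in_edges("WallPost", data=True):
        print(f"{src} --{data['relation']}--> {dst}")

Renames, moved definitions, cross-references and so on would just be updates to this graph, and any textual rendering could be regenerated from it.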
I've been playing with prototyping a completely unrelated project using graph databases, but this really gets me thinking... drop me an e-mail if you'd be interested in collaborating on something like this.
I was thinking more of having the modified AST feed back into the source code, or more accurately, into the human-readable view of the actual source code. Like the comment example: imagine having the monkeypatch still in the comments module, but when you open the WallPost class, you can see the "has_many :comments" and a tag that takes you to the comments functionality.
Alternatively, you could do the reverse: someone writes the change in the more traditional manner, but you can view - and edit - the change-set as if it were a module of monkey-patches isolating the relevant concerns. Or any particular view of the program someone can think of that's useful.
I'd like to program like that.
ETA - Sorry for the accidental downvote; found a couple of your other comments to upvote.
Yes, like any modern high-level language it would use some concepts "borrowed" from LISP. Credit where credit is due...
On the other hand, I don't want to go completely bananas with the 'structureless' LISP approach. In my opinion, at least, it would aid comprehension to mirror modern high-level languages. But you'll be able to choose Ruby or Python syntax-mode at will (or maybe even LISP-mode :-).
This is only true because our tools for reading and exploring source code suck. I mean, we still work at the "unit level" of code (i.e. files and functions, classes, what have you). If our development environments let us viscerally experience the relationships of these units (a systemic perspective) we'd be in much better shape.
I disagree. The dream of getting a higher level picture of code is pervasive. E.g. UML was born to do exactly that. However, our modules usually do not compose cleanly, so you cannot visualize them as connected black boxes.
One common followup argument is that functional programming is great for this, because pure functions can be treated as black boxes. Sorry, but the problem is on the algorithmic/business logic level, which cannot be solved by the choice of programming language/paradigm.
Example: Remove data from one database and put it into another one, transactionally. At any point in time any other process must see the data in exactly one of the two databases. You can not merge the databases into one.
The problem with your example is that the requirements are rarely that strict. You don't need to move data from DB1 -> DB2 within a single clock cycle. And if you did, some systems would still think they saw the same data in both places due to latency, caching, etc. Thus the compromise of using a hidden intermediary (or blocking DB access) is not only reasonable but necessary.
For stuff like this the absolute worst case is you update one or more legacy systems. But, that's just part of the cost / benefit / risk analysis.
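To make the "hidden intermediary" concrete, here's a toy Python sketch; dicts stand in for the two databases and every name is invented:

    # The ownership table is the hidden intermediary: readers never look at
    # db1/db2 directly, so a single atomic flip decides which copy they see.
    db1 = {"user:42": {"name": "Ada"}}
    db2 = {}
    owner = {"user:42": "db1"}

    def read(key):
        return (db1 if owner[key] == "db1" else db2)[key]

    def move(key):
        db2[key] = db1[key]      # 1. copy; readers still resolve to db1
        owner[key] = "db2"       # 2. atomic flip; readers now resolve to db2
        del db1[key]             # 3. clean up the stale copy at leisure

    print(read("user:42"))       # served from db1
    move("user:42")
    print(read("user:42"))       # served from db2; readers saw exactly one copy throughout

Of course this only works because all readers go through read(); that indirection is exactly the compromise being argued about.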
UML? I said 'viscerally experience' not 'be forced to gouge your eyes out'.
I joke of course. But wouldn't you find it weird if two hundred years from now we were still writing and reading code like we are today? If that were the case, I'd say we had done a shit job of developing development.
The only problem with monkey patching is that other modules might be affected. If module A monkey-patches module B, then module C might break. If we had a mechanism which guaranteed that C does not notice A's monkey-patching, it would be a super-useful decomposition method.
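A toy Python sketch of the failure mode, with made-up stand-ins for the three modules:

    import types

    # "Module B": the shared library.
    b = types.ModuleType("b")
    b.greet = lambda name: f"Hello, {name}"

    # "Module A": monkey-patches B for its own convenience.
    def load_a(mod_b):
        mod_b.greet = lambda name: f"HELLO, {name.upper()}!"

    # "Module C": written against B's documented behaviour.
    def c_uses_b(mod_b):
        return mod_b.greet("world")

    print(c_uses_b(b))   # 'Hello, world' -- as documented
    load_a(b)            # A's patch is global, not scoped to A
    print(c_uses_b(b))   # 'HELLO, WORLD!' -- C's behaviour changed without C changing

With the lexically scoped patching being wished for here, only A would see the new greet.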
I disagree; as long as they have a consistent system of naming, you can find everything you need. Most of my problem comes from not being able to figure out how to monkeypatch it instead of doing it the lazy, brute-force way.
Show this article to any hot-shot developer... if they don't come away with a new smidgen of self-doubt, slowly back away: they aren't worth working alongside.
To me it looks like the increasing popularity of AOP, richer (or at least more functional) syntax in languages like Ruby, and the relative ubiquity of property chaining and two-way data binding across different frameworks are all moving us toward a more semantic type of software specification, where structures and relationships matter more.
I think that's pretty obviously a good thing, since it reduces cyclomatic complexity, coupling, and the amount of code, and increases reuse, although you can obviously take it a little too far when you are actually working in a mostly imperative paradigm.
I think this is one of the types of things that is going to (eventually someday) finally wake people up to the limitations of unstructured (although colorful) ASCII source editing.
The author is Reginald Braithwaite; the movie has a character named Braithwaite who sets the plot in motion. I wonder whether this whole post is autobiographical, or whether the Williams/Kelly story is just an old joke and Braithwaite is a coincidence.
This nicely draws together several sub-currents of developer thought pertaining to the "am I good at this?" current that all developers have. I found myself bouncing back and forth on some of the ideas, waiting to hear how it all turned out for this developer... basically the same feeling I get from a good short story.
Have to give the author (and Williams!) credit - that's a damned neat development style to my mind. As always, running into it when a client is yelling down the phone would be painful, but it is interesting to consider. It's certainly a strange approach, but one with virtues, unlike much of the beyond-spaghetti code you sometimes run into.