Most code sucks because we have the fluency equivalent of 3-year-olds trying to write a novel.
Let's get a little more specific...
Most code sucks because the programmer:
- didn't name his variables what they really were
- didn't understand variable state
- didn't understand variable scope
- didn't understand the basic concepts of iteration
- didn't understand any of the algorithms he needed
- wrote the same lines of code multiple times
- didn't know enough about the language to find a better construct
- didn't understand concurrency
- didn't understand fundamental database concepts
- refused to follow well worn standards
- didn't have any well worn standards to follow
- built code on top of hopelessly designed data structures
- didn't understand deeply enough what was going on under the hood
- built to requirements that changed too dramatically to recover from
- didn't even consider the concerns of the "next" programmer
But most of all:
- was just good enough to get "something" (no matter how bad) deployed on time
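A tiny sketch (all names and numbers invented) of what a couple of those failings look like side by side, and one possible cleanup: the magic number gets a name, the variables get named what they really are, and the duplicated loop collapses into one function.

```python
# Before: vague names, a magic number, and the same loop written twice.
def calc(d):
    t = 0
    for x in d:
        t += x * 1.08
    s = 0
    for x in d:
        s += x * 1.08
    return t, s

# After: the variables are named what they really are,
# and the repeated lines live in exactly one place.
TAX_RATE = 1.08

def total_with_tax(prices):
    return sum(p * TAX_RATE for p in prices)
```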
I think you can check off everything on this list and still write inscrutable code. The worst code I've seen is code that was written so that only one person understands what's happening -- the author. Typically the code has too many branches and ridiculous call stacks. The writing analog is a run-on sentence.
This usually happens when the programmer at fault is really smart but still hasn't learned that at some point in the future, someone needs to come along and understand the code he/she wrote.
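A small, made-up illustration of the "run-on sentence" shape and its fix: the nested version forces the reader to hold every open branch in their head at once, while the guard-clause version reads one short sentence at a time.

```python
# "Run-on" style: every new condition adds another level of nesting.
def ship_order_nested(order):
    if order is not None:
        if order.paid:
            if order.items:
                return "shipped"
            else:
                return "no items"
        else:
            return "unpaid"
    else:
        return "no order"

# Guard-clause style: each early return reads like its own short sentence.
def ship_order(order):
    if order is None:
        return "no order"
    if not order.paid:
        return "unpaid"
    if not order.items:
        return "no items"
    return "shipped"
```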
It would be incredible to retitle the senior people in an organization, the ones now called architects or senior engineers, as editors. With their current titles they are involved much more at the beginning of the process than at the end, while 'editor' implies the opposite.
It would be nice to have the senior folk guide the developer as opposed to handing off a blueprint and getting out of the way. (Not that it's like that everywhere, just some places I've been.)
This is the Linux model. First comes discussion, then front-line coders create a patch, then lieutenants review the patch and submit it to their superiors, who review it in turn; ultimately, all patches go through Linus before being committed to the kernel. This way, style and potential issues are enforced by the most senior people, the stewards of the code base.
Linus has deputized a few people to commit without his review, I believe, but this is after years of working with them.
If you don't work in the code you shouldn't get an opinion, and giving one to someone like that is asking for trouble. At the end of the day, it will be a developer who feels the pain, not the architect. There really is no incentive to do a professional job if a sloppy one doesn't make your own life harder later (and sometimes it doesn't even do that).
*professional means good quality balanced with maintenance and time available during development.
Just putting this out there, despite the negative sentiment it will undoubtedly bring. Try pairing. You don't have to do it all the time, but if you are working on something that you have to think about, bring another developer over. The best way to ensure what you are writing is good is to get another set of eyes on it. Preferably a set that will actually need to do something with the code at some point.
The best approach — with or without upvotes — would be to ask people if you want that information. Upvotes wouldn't reliably provide it — I and many others feel the best use of the voting system is to promote insightful and thought-provoking comments even if I don't believe they're ultimately THE right answer, and we vote accordingly.
I've taken the editor role in a couple of recent contract jobs. While the clients were both complimentary, I think I didn't do too well for one of them, where I came in to an existing codebase with a shortage of domain knowledge -- and this is going to be a general problem for a specialist editor, more so than with prose. I was more like a copyeditor. (For the other client I'd written the code they started with.)
Totally agree that more code reviews are needed. I guess then the question becomes: how do you teach someone to do a good code review? Code reviews come with experience, I get it, so if you are a junior programmer you get your code reviewed by someone senior. But there is also one very important part of a code review: being able to give good advice to someone. Not being over-critical, but providing help that is actually going to build someone up. At a company I used to work at, they did this beautifully.
1) Every check in required a code review by a peer. In your commit statement, you specify who reviewed your code.
2) If a senior programmer wrote code for a feature that wasn't too complex, he would get a code review from a junior programmer. The junior programmer doing the review learned what good code looks like, and also learned how to give feedback in a constructive manner, since he or she was talking to someone with more experience.
No, traditional code reviewing is just spot checking. That is analogous to having an editor only look at a few passages of a novel. An editor needs to work with the whole text. This is the trouble with formal code reviews. Pair programming attempts to solve this by making all code get reviewed as it gets written. Like it or hate it, that at least tries to solve the core problem.
I suppose we're not "traditional" in this sense, but at Facebook we do somewhat more in-depth code reviews than just spot checking. Generally if you're modifying an existing system, you get one or two people who're familiar with that system (and thus know the big picture of code design, layout, etc) to review your diff.
If you're writing something totally new, you usually throw ideas and mocks (either UI or code-structure) until you decide on a solution that works.
I've found this to be pretty good at helping newer engineers write good (and idiomatic) code without hindering development speed much.
Code review might serve the same purpose (depending on how you do it) but I've had a hard time getting organizations to "buy in" to code reviews for various reasons (fear, pride, cost, etc.).
On the other hand, having the Architect act as editor after the code is written not only serves the "code review" function, it also gives the Architect feedback on the original design, and on how the abstract design ideas are translated during the construction phase.
I think it would improve communication on both sides (Architect and programmer) as well as overall quality...very nice...
It is not in the interest of the programmer to allow others to understand his code. That would make him easily replaceable. I know quite a few consultants who write like that on purpose and secure their jobs this way.
Programmers are also not rewarded for clean code. They are rewarded for quickly delivered code. Beautiful code that is delivered two weeks too late is usually a bad idea. This is business logic.
It is easy to measure how long it takes a programmer to develop code. It is much harder to measure how much time is wasted maintaining badly written code.
A lot speaks against writing clean code, and little in favour of it. The only people interested in it are later maintainers and academics.
Let's face it: what's wrong with code that is a bit messy but does its job correctly and quickly? If the author himself has to edit it and no longer understands it, then he/she will refactor. If there is no need to touch the code because it does its job, who cares? If the original author is already gone and there was no time to make someone else familiar with his code and style, think about how people are treated in your company and refactor management instead of code.
All of those are absolutely true, and many of them reduce to one thing
- didn't have a frickin' clue what the code was supposed to do
I remember reading the old cartoon (and it was already old when I saw it 25 years ago) of the manager saying "You guys start writing code, I'll go find out what they want", and swearing I would never be that kind of manager only to find, all those years later, that many of my programmers had already internalized the notion and routinely sat down to write code without having a clear idea of what the block of code was supposed to accomplish.
37signals, in their book Rework, put it really well. You have a business model from day 1. Not everyone is Facebook or Twitter, so having a concept of a business model is very important. So yes, having a clear vision of your product is necessary. That vision can change while you are building, but you still need a starting point.
Meh. I work at a small shop. Most of the code I see that sucks was written by competent programmers under high schedule pressure. The code is clearly not well understood anymore, but if you trace the history you can see it's due to a few years of tacking on features as quickly as possible. The technical debt surely slows us down sometimes, but hey, that was a business decision, maybe even a correct one! Core business functionality is rock solid; it's mostly the UI code that sucks.
I wonder what the average startup codebase looks like?
I get tired of the "blame the shitty programmers" line of thought. We're all shitty programmers. Yet we all "understand variable scope" and "what the code was supposed to do." Some of us have such a large ego that we think it's others. No, Sancho. It's you.
Code gets complex as hell very quickly. If it was building a house, it would be built in a week and architected on the fly. The person mixing the concrete, who has to know the concrete grade and drying time, also has to know that the upstairs has just shifted and now the load is too much on the center beam.
The reality is that, unless you're dealing with tiny apps, that "bad programmer"? It's you.
I agree that we are too slow to point the blame at ourselves. It's healthy to be self-critical.
That said, there are programmers who are legitimately lacking both in basic skills and desire to attain those skills. It's orthogonal to the problem you're talking about, but these people do exist, and they're not as uncommon as one might hope. I can teach the difference between pointers and references; I can't teach you to care.
This. I ascended to a managerial position a couple of years ago, rising from the ranks to steer what used to be a group of my peers. A lot of cruft had been ailing the team for years, but one guy stood out: he had been fired, rehired, and now acted as if there was nothing he could do to damage his standing in the team. When we were peers, it bothered me; when I became his boss, I really tried to sway his attention toward the product, toward learning, toward becoming more than a "drag this out of the component box" programmer.
Needless to say, it failed. The guy was irreparably lazy, and trying to get him excited about building the things we are fortunate enough to get paid to build only made him try to put me under a bus when he got the chance. I had to let him go, and have slept better ever since.
Good low-level backend stuff takes years of iteration under static business requirements. It's also a well-understood domain with many prior implementations to learn from, and the companies tend to be product-based, so you have much less customer pressure.
Teams reacting to changing or poorly understood requirements (think enterprise and defense software contracts) have a much harder task. Nobody has solved the problem before, nobody understands the problem, and project management is typically forced into waterfall in order to make sales, so it's not like there's time to redo things that get duct-taped together, if the duct tape holds.
Even if you are working in a compiled language, you still can't always rely on the compiler.
The compiler will not stop a programmer from writing invalid code, without any tests, and checking it into a repository, where it silently waits for an unsuspecting programmer to discover it months later....
"How do I do what I want here? Aha, someone already wrote ExactlyWhatIWantHere.... WTF, it doesn't even compile? But how is anyone using it? Oh, it's not being used anywhere? How the heck did it get here? Ugh..."
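A Python rendering of that trap (module and function names invented): the file loads without complaint, so the broken helper sits unnoticed until someone actually calls it.

```python
# Looks reusable, is never exercised anywhere, and has been broken
# since the day it was checked in: 'recrods' was never defined.
def exactly_what_i_want_here(records):
    return [r.upper() for r in recrods]  # NameError, but only when called

# Months later, the unsuspecting programmer discovers it the hard way.
try:
    exactly_what_i_want_here(["a", "b"])
except NameError:
    print("WTF, it doesn't even run? But how is anyone using it?")
```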
In college, my CS 2 class was taught by a professor very different from the rest of the teachers there. He wanted all of your programming assignments on paper. Then you'd get them back with every possible bug marked in your program, and he very rarely missed any.
That was the first time in college I thought "I really need to learn how to do that!"
I went to a state school, so maybe all you guys with your degrees from fancy schools wouldn't be that impressed, but the difference between this professor and the rest there was gigantic. (He retired that year in a round of state-wide budget cuts, so that's the last class I had him.)
A lot of the other teachers wanted you to hand in binaries so they could see them run. Sometimes with the code, sometimes not.
 I actually don't know of any case where he did, but I'm not saying "never."
One thing I picked up (luckily) early on was that when you are learning something the fastest way to actually learn it was try to help other people learn the same thing. For programming this meant helping out on assignments in the labs, reviewing code etc. Very quickly you find yourself seeing a wide variety of bad practices and can recognize common mistakes and good practices. As a bonus you have to actually know the answers to the questions, but it is ok because if you don't know at the start you will soon enough.
tl;dr Pretending to be an expert until you actually are seems to be one of the fastest ways to learn something.
This is very true. At my current company, the last person who worked with the huge part of the code I write left about a month after I started. This left me constantly scrambling to figure out how to fix bugs that would come up in parts of the code I never even knew existed. I was managing OK, but when we hired new developers, having to teach them the little bit I knew caused me to understand the code way better than I had before.
I've decided my new method of training employees will have the most recently hired employee train the new hire.
I agree as well. I've noticed that in ALL problems, once I establish a solid, consistent language around the feature I'm building, the programming part is easy. This is true even on complex problems. It's very rare to have a problem I can talk about in detail that is not easily solvable.
I've also noticed that peers struggling with implementing solutions usually have an inconsistent language.
In the class it was often referred to as pseudocode, but unlike the pseudocode in examples in random books/specs, it had a real syntax, and you had to put in everything you would in a real language or you'd get points off.
It was a great way to learn programming (and teach it too).
*(To disambiguate for the left-coast people, GT isn't like ITT; it's closer in quality to Caltech, but much cheaper. Apparently the best salary-for-tuition school in the nation today.)
As in, you didn't get to execute code on a computer at all?? That seems kinda odd--I think a large part of the pleasure of programming, for most people, is seeing your code actually do something. I think Logo was captivating because it made this aspect especially prominent.
Yup. The point was to make people not randomly throw guesses at the compiler, like the monkeys-flinging-poo experience many people have, but instead learn how it worked and think about the right way to write the right thing.
First job out of school was Andersen Consulting. Went to their programming boot camp which was two or three weeks of 14-hour days learning their methodology and (sigh) COBOL. One of the final assignments was to write the code for a particular set of requirements. Then desk check it (i.e. on paper) until you were certain it was correct. Then you saved it and submitted the file. You got one chance, it had to compile cleanly and pass their suite of tests.
Most people passed; it was not a "gotcha" problem, you just had to be able to read carefully and pay attention to detail. Andersen hired a lot of liberal arts majors who had done a lot of writing, I was one of only a few in that group who had done a C.S. curriculum.
Do you mean handwritten, or just a printed hardcopy? If it was handwritten, cue loading up my handwriting as a font and typing it anyway. ;)
A couple of my programming instructors use a build system that compiles your code with three different compilers, and runs it with various tests and diffs the output with a master copy (or sometimes stress-testing and tracking memory/time/probe counts). One of them also ran valgrind on submissions. So you'd get your "accuracy score" on the same day or so as the programs were due, then the teacher/TAs would grade the source and dock points for bugs not found in the test, supreme ugliness, etc.
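A minimal sketch of that style of harness in Python (the directory layout, the `*.in`/`*.out` file-name convention, and the single-interpreter setup are all my assumptions, not anyone's actual system): run the submission on each test input and diff its stdout against a saved master copy.

```python
import subprocess
import sys
from pathlib import Path

def grade(submission, test_dir):
    """Run `submission` on each *.in file and diff stdout against *.out."""
    passed, failed = 0, 0
    for case in sorted(Path(test_dir).glob("*.in")):
        expected = case.with_suffix(".out").read_text()
        result = subprocess.run(
            [sys.executable, submission],
            stdin=case.open(),
            capture_output=True, text=True, timeout=10,
        )
        if result.stdout == expected:
            passed += 1
        else:
            failed += 1
            print(f"FAIL {case.name}")
    return passed, failed
```

A real harness would add the stress tests, valgrind runs, and memory/time probes the instructors mention; this only shows the accuracy-score part.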
I transferred to an underfunded state school from a well-funded community college. The CC had computer labs that CS classes were taught in. The state school taught CS with chalkboards. I learned much more with chalkboards.
All true facts, unfortunately. To get better teachers you have to attend a "teaching" college/university. The name is detrimental, as most students would probably be happier at such a place. The "research" university really isn't for everyone, but those schools get the most prestige, and people think, oh, I should go there and learn from the "brightest minds in the field."
That might be true of some people. For someone coming from a small town where everyone told him that he would go on to do amazing, wonderful things, it can be very depressing for him to learn that he is nothing special.
The opposite can be true too, I expected to get tanked at CMU and did pretty well (3.65 gpa, senior thesis, going to Facebook). If I'd gone to Ohio State like the majority of my peers from High School (Hilliard Darby High School has sent a whopping two people to top tier engineering schools (CMU and MIT) and none to Harvard) and even gotten a 4.0, I never would have believed I was more than a "talented state school kid."
At the end of the day, it's on you to put yourself in the most challenging situation you can possibly survive so that you can learn the most. Sheltering one's fragile ego never did much for anyone in the long run.
Generally true, although the Engineering department at my university (Waterloo) had a focus on making sure the teachers met a high "you must be this good at teaching bar" so the professors I had were routinely good.
Stanford's CS department hires lecturers (not professors) to teach many undergraduate classes, especially the introductory ones. Lecturers aren't expected to do research, and are hired based on their teaching ability. The ones I had were all incredible teachers.
When you've learned two subjects with some depth, you start seeing some analogies. Paul Graham thinks that hacking is very much like painting, and this guy thinks that writing code is like writing English, and at one point in my life I believed that coding in Perl has a lot in common with calculating cross sections.
That doesn't really mean that these things are any more similar than two random areas of human activity. Our brains are just good at finding analogies, finding analogies is like spotting patterns, we see them even when there aren't any and when there actually are some their appeal is difficult to overcome. And if you know two subjects well you have more material to cherrypick analogous things from.
> Japan is somewhat famous for churning out students who know a lot about English, but can't order a drink at McDonald's.
An old friend of mine had a master's degree in French literature. She used to live in Ottawa, Ontario and was fond of a pizza place just across the river in Hull, Quebec. One day she called to order a pizza, and when it came time to ask for it to be delivered she realized she didn't know how. She asked if the pizza place could put the pizza into a car and drive to her house with it.
The order taker replied, "Oui, delivery. Nous pouvons le faire."
I guess this also illustrates the difference between studying French at university and studying it at Alliance française--or studying computer science at university, versus programming at a trade/vocational school.
Edit: - I should point out that the French word for delivery is livraison; but like most languages, day-to-day Quebecois French is porous enough to borrow words from other languages where expedience warrants. Hence le week-end (not le fin-de-semaine), frencher (to French kiss), etc.
As a further digression, I should point out that it seems less common (i.e. less acceptable) to drop anglicisms into informal Quebec French than in Europe, where native French speakers do not feel threatened by a dominant continental Anglo culture. I was amused on a recent trip to France to learn that the French word for wifi is "wifi" (pronounced wee-fee). Since wifi is short for "wireless fidelity", a more canonical French term would be something like fidélité sans fil.
Just a side note, in France too we say week-end... Fin-de-semaine is usually used to mean at the end of the week (for example what people would say at work if they expect the answer to come before friday evening..)
AFAIK in Quebec the preferred term is "courriel", which is a portmanteau of "courrier électronique" in the same manner that "email" is a portmanteau of "electronic mail". (Funny that we're talking about Anglicisms in French, given the vast number of English words of French origin - like "portmanteau".)
As a beginning-programming instructor, I find my students have little difficulty understanding language syntax and operation, much like the "language lawyers" mentioned. Their recurring problem is not correct syntax; it's internalizing the process of expressing a solution to a problem using that syntax. They have few to no examples of what a real, functioning, well-written program looks like.
Indeed, programming is writing: it's expressing the solution to a problem using a formalized language. Understanding how the program works is one thing (and yes, all too often this is a problem); a good program also expresses why.
A 3-year-old writing a novel won't succeed because, despite the ability to speak, he does not comprehend the process of concocting and presenting a story; all he knows is the minimum needed just to read one. He's not going to write good novels until he reads lots of good novels, and groks the literary structure and process of having and expressing an idea. Ditto programming: students may know the syntax, but they don't grasp the notion of "writing" because they haven't seen any decent examples.
Rosettacode.org may be a good starting place.
Thanks for the article; I may finally have identified the gap in my teaching. Now to find concise examples to fill it with, as the curriculum does not allocate copious time for reading lots of coding examples. Anyone have links to good examples for beginners regarding fundamental concepts? I don't mean syntax (I teach that well enough), I mean elegant uses akin to poetry or short stories. IOCCC (a moment of reverent silence for greatness ended) has wonderful examples of cool things in minimal code, but demands obfuscation. Anything akin to IOCCC for good, readable, nifty code?
How long have you been teaching programming, and how did you get into it? I've been very interested in teaching beginner programming for a while now, as I'm completely unsatisfied with my "top 10 in CS" university's shallow approach. If you could drop me an email (in my profile), that'd be great.
Essentially, "computer science" is a myopic name for what is really "process studies." It just so happens that defining processes to run on computers is forwarding the field more than any other. But in the distant future we will all understand the field to be about abstractly defining processes i.e. a series of repeatable steps to accomplish a task. Which would be independent of the mechanism used to execute the process.
Therefore programming is the act of defining a process. And programming languages are what we use to define them.
It is covered fairly intuitively in the first few minutes of the SICP video lectures as well.
I'm a little suspicious that this analogy goes beyond "theoretical understanding without practical experience is worthless." You could draw a similar analogy between architects who know everything about architecture but can't design proper buildings because they have no experience. Or a painter who knows everything about technique but can't paint a horse, and so on.
It is easy to find differences between good writing and good code: uniformity in code is good, while uniformity in prose is bad. In writing, a large vocabulary is an advantage; it's difficult to see that being the case in code. In programming, proper abstractions are essential, while I can't really see the equivalent of that in writing.
As far as analogies go, I can't see how this one is much better than others.
Comparing declarative past-tense prose to instructions doesn't work. "Rush to the exit doors, then leap down the stairs, then finally run to get the train." Humans need first/then/finally, or else we might execute instructions concurrently ("Boil water. Dice an onion."). Although these words are unnecessary if, at the beginning, a human is told "follow these instructions in order," just as a compiler is told to read files. "Finally" is the signal to stop executing after the last statement; compilers use EOF, close tags, etc. to achieve the same thing. So the analogy is quite precise if imperative sentences and code are compared, and the whole prose text is compared to the code plus the compiler.
I don't think this is missing from the article, it's actually what the author said. If you only learn the words and the grammar, without seeing how they are used to express things (and without trying to express things), you won't learn foreign language/programming.
Oh come on. Programming is not writing. When you're reading a novel you don't have to worry about concurrent access to a resource, threads, or asynchronous events, much less conditional branching. With all due respect, I think this is just an inane post. That said, I can't blame the guy for dropping software development and moving to Japan to teach ESL. I often entertain similar fantasies; programming is a great way to have the zest for life sucked straight out of you.
Edit: Would someone have the courtesy to mention why they are downvoting?
And in programming you don't have to worry about capturing the audience, the development of the characters, the story arc, being clear but not being obvious, etc.
Both in programming and writing, the actual writing part is the easiest part. And while bad programmers can make a program work, and bad writers can tell a story, it takes good ones to do it coherently, efficiently and enjoyably.
It's not the exact same thing, which I don't think anyone is saying, but there are some similarities there, especially the aspect of writing readable code.
They're not nearly the same thing. The scope of modern literature has hardly moved in hundreds of years. Compare Updike, Roth, or Mitchell to Dante or Shakespeare -- they'd seem like near contemporaries in programming.
In contrast, in the past, programs to do Newtons method or compute trig tables were often the full scope of a program. You certainly never had a Halo 3, Windows 7, Google Search, or WordLens application written even 50 years ago.
Good literature isn't about automating increasingly sophisticated processes. Programming is all about that. This leads to increasing complexity of programs, where the goal of software engineering is to abstract/hide as much of the complexity as possible.
The only thing they really share in common is that they're both written in text. If programs were written by soldering wires, no one would make such odd comparisons (do EEs relate circuit design to writing literature?).
I think we give this slashdot author far too much credit.
They have one more thing in common; the text is read by other people. And the better written, the more effortless it is to understand.
I think this aspect of it is what the slashdot author is mostly talking about, suggesting that if you write code as if you were writing to the next programmer, rather than to just the computer, your code will be better for it.
Clearly, programming and writing isn't the same thing. Nobody is claiming that and, guaranteed, nobody here thinks that. It's a metaphor and like any other metaphor it breaks if you bend it enough.
Unfortunately, this metaphor breaks at the onset. Programs aren't primarily written to be read by people. They're primarily meant to enable functionality. Readability of code is important, but not the ultimate ends -- it is part of the means.
But the important part of my post was that programming is about automation. It's not about weaving a story, even for the next programmer. It's about building abstractions for automation. And yes, it's important for other programmers to be able to build on top of it or service it, but given a choice between the right user experience and the right dev experience, user experience should usually win (although there will be some exceptions).
The author argues that good code should be written for people. The metaphor doesn't break just because you disagree with the author's point.
IMHO, any dolt can produce code that only works. All too often I have to sift through horrible code that works, and I think the author is right in that if the programmer who wrote that (sometimes that's me, sometimes it's someone else) had had the next programmer in mind when he/she wrote it, it'd be much less of a pain.
The author argues that good code should be written for people.
If good code were written only for people there'd be no quicksort. No fast implementations of FFT. Probably no fast versions of memcpy. The reason is that coding is not primarily about writing for other people. Look at Donald Knuth's code. Extremely well-written, yet certainly doesn't stand on its own. And there are certainly design decisions that optimize for both asymptotic complexity and small constant factors.
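The tension is easy to demonstrate. A quicksort written purely "for people" (a textbook sketch, not anyone's production code) is a few transparent lines; the fast versions in real libraries give up that transparency for in-place partitioning, small-array cutoffs, and cache-friendly memory access.

```python
def quicksort(xs):
    # Crystal clear -- and allocates fresh lists at every level of
    # recursion, which no performance-minded implementation would do.
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    smaller = [x for x in rest if x < pivot]
    larger = [x for x in rest if x >= pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)
```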
My point is that analogizing this to books does no one any favors. We're having a conversation on the merits without having to allude to haikus or novels, gerunds or foreshadowing.
Programming is difficult, even for code that "only works" (you'd be a billionaire if you could find a way to quickly produce code that "only works"). It has little more in common with writing than tarot cards -- which also are made to be read by humans.
My current theory is that programming is quite literally writing.
That's what the author wrote.
Let's take another (stupid) quote from the author:
Most code sucks because we have the fluency equivalent of 3-year-olds trying to write a novel.
There's no absolute nature regarding the state of ability for any given task. There is no way to map fluency in English to fluency in any other domain. It just doesn't make sense.
This is just all-around lazy thinking. It's buying into a metaphor because you either don't have the ability or the desire to actually think about the real issues. edw, earlier in this thread, actually took a little time to think about the issue. This slashdotter made no effort, had no substance, no data, just a cheap metaphor that fell apart at first glance.
or mistaking the rest of us for complete morons
How's this for lazy thinking -- I use duck typing.
I'd say it's called creating an analogy where one shouldn't. As a general rule (and maybe I'm too influenced by Dijkstra), don't introduce metaphors/analogies where you can reasonably talk without one. In this case, no clarity was introduced by its addition.
It's like adding gravy to mashed potatoes. Sure it may make the potatoes taste better, but now you can't put the gravy back in the gravy bowl, can you?
In programming, do you have to worry about phonology, slang, or number agreement?
The guy's deeper point is that programming is a language acquisition task and we would do well to take the rich set of lessons from ESL and foreign language learning and apply them to CS learning. Will all of it stick? No. But there is fertile green field to plow here.
> programming is a great way to have the zest for life sucked straight out of you
Sounds like you're in the wrong field. Just because you find the thing a drudge doesn't mean others feel the same way, and personally I find the cynical 'well real-world programming is ultimately crap' specious and poisonous - to you it is, to me it is not. Why are you still doing it? And why are you stating it like it's some immutable fact we are all avoiding somehow?
Personally, I find programming a wonderful, amazing thing even when working on the most incredibly dreary software, and of course considerably more so when working on the more interesting stuff.
This kind of stuff is unfortunately common and applicable to any + all professions + activities out there. If everybody listened to the nay-sayers, nobody would have tried doing anything.
> programming is a great way to have the zest for life sucked straight out of you
If that's what you really think then maybe you should go become an ESL teacher in Japan, or at least try a new position that may fit you better. I find good programming roles to be highly enjoyable, challenging, and rewarding jobs. Sounds like you might need a change - shake things up a bit, re-examine assumptions and that sort of thing. It's very bad in the long run to hate your job.
> how come not all great programmers are great writers?
I imagine most programmers are great writers from a structural point of view. The rest is emotion. Programmers know how to appeal to machines, but often are not able to connect with people in the same way; something that extends beyond writing, if stereotypes are any indication.
It's a nice theory but I disagree. Programming languages are really just a poor effort towards a mathematical notation.
There are those adept in theory that can't practice well, but that's because to practice the art means dealing with a plethora of hacks, exceptions and miscellaneous trivia completely unrelated to the core problem at hand.
Personally, I don't think the answer is becoming 'fluent' in handling the exceptions and idiosyncrasies of the chosen language, framework and platform. Rather we need to work on building a notation that provides better abstraction mechanisms.
How is mathematical notation different from a written language like English? It conveys meaning using glyphs, uses nouns and verbs (and even adjectives). It expresses all sorts of ideas. One might even say that math's notation is a poor effort towards English.
Language can be expressive in ambiguous ways, with multiple valid ways to express the same emotion. Notation should be definable, unambiguous, and precise. The fewer ways to express something, the better.
> The fewer ways to express something, the better.
Your last sentence could be rewritten "It is better to express things in as few ways as possible", or "Having fewer ways to express something is better". English, however, allows you to omit the main verb in a parallel construction, so you were able to write it more briefly, with a 3-syllable climax after a pause at the end for effect.
Shouldn't programming languages also provide many different ways to express things?
No, because that makes it harder to understand what it's doing. Remember that programming languages are not learned at a very young age and continuously trained for decades the way natural languages are.
> No, because that makes it harder to understand what it's doing.
Without looking at an example of working code, how is it possible to say that one particular phrasing is so obviously better than any other that it and it alone should be allowed to exist?
I appreciate in Perl very much the postfix conditional expression syntax because it exploits the end weight linguistic property and allows me to emphasize the most important part of the statement when the situation warrants. I believe that makes my code clearer because it looks different from the Algol-standard conditional syntax.
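For readers unfamiliar with the construct: Perl lets you write `print $msg if $verbose;`, action first, condition trailing. Python's conditional expression has a similar value-first shape; a small sketch (the `label` function and its data are made up for illustration):

```python
# End-weight in practice: the important value leads, the test trails,
# much like Perl's postfix "EXPR if COND" (sketched here in Python).
def label(order):
    return "shipped" if order["paid"] else "held"

print(label({"paid": True}))   # shipped
print(label({"paid": False}))  # held
```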
> Perl ... allows me to emphasize the most important part of the statement when the situation warrants.
Yes. The ordering of elements in a sentence is its "thematic" structure, just as important in communicating as the "transitive" structure of sentences, i.e. the relationship between nouns, verbs, etc. Most programming languages copy the transitive structure of natural languages, but not many allow the programmer to freely choose the thematic structure.
The problem is that the language becomes virtually unknowable. Perl has a ridiculous amount of syntax. I am sure that I could find half a dozen lines of Perl code that the average Perl developer would not understand because everyone only knows part of the language.
Who is the average Perl developer? I'm a student. I should probably know less about Perl than a junior Perl programmer - and I know almost all the keywords.
Is simple.wikipedia.org really better than wikipedia.org for the average English speaker? If your only argument is that simpler language lets more people understand, why bother saying words like "unknowable"? Why not say "not knowable"? Isn't it superfluous to have these words in your vocabulary?
Being able to accurately express an idea quickly has its value.
Do you know all of the operators as well? All of the special variables? A significant portion of the regex syntax? The four different ways to call subroutines and the effects that each method has? The list goes on and on and on.
I work with about a dozen Perl developers and there are only a couple of us that know a significant portion of the language. Problems occur when someone uses a less common piece of syntax and no one else knows what it does. Everyone has their favorite ways to do things and people tend to have very strong preferences. This makes for code that is much more difficult to understand.
English is a far more complex topic. I am not sure where to even begin. I do think that it helps for a group of people to share a common set of language prescriptions.
You could have been easier to understand if you had simply said "nope, you're wrong". But instead, you went on for three paragraphs describing your point of view. You even used a word I had forgotten the definition of: "operators" (yes, I had to look up the exact definition). Yet I understood everything you said with no ambiguity.
Was this incorrect? Was explaining your point of view inefficient or causing problems in understanding? I think you inadvertently showed how expressive language has enormous benefits, even if people can never understand it 100%.
Having more ways a statement can be interpreted does not make it easier to identify the correct interpretation, so I don't think this is really a reason people have a harder time understanding code or mathematical notation than they do their native language.
And mathematical notation offers plenty of ways to express a given statement (transform with De Morgan's law, contraposition, shuffling and negating quantifiers, etc.).
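For instance, the same logical statement can be written in several equivalent forms:

```latex
\neg(P \land Q) \equiv \neg P \lor \neg Q
  \quad \text{(De Morgan)}
P \to Q \equiv \neg Q \to \neg P
  \quad \text{(contraposition)}
\neg \forall x\, P(x) \equiv \exists x\, \neg P(x)
  \quad \text{(quantifier negation)}
```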
It is not the superficial appearance, it is the way it is used. Mathematics specifies a particular, limited, well-defined set of actions and relations. It is the character of those 'game rules' that makes it mathematical, compared to general language.
Programming isn't about writing. It's about logic. It's a series of instructions for a computer to follow.
The old 'write down how to tie shoes' or 'write down how to make a peanut butter and jelly sandwich' projects show just how hard it can be to pin down exactly what needs to be done to do seemingly simple tasks.
The writing is merely how you transmit the logic.
I disagree so hard. Computer programs should be written primarily for humans to read, and only incidentally to be executed by computers. The connection between writing and programming is vital: they are both the practice of expressing structured ideas. It has nothing to do with the computer. It has nothing to do with the English language, or Vim, or Word. It's about starting with an idea and hacking away at it, removing ambiguities and possibilities, until only something unique remains.
> I disagree so hard. Computer programs should be written primarily for humans to read, and only incidentally to be executed by computers.
The funny thing is that the same is also true for mathematics.
Mathematicians writing proofs and theorems are communicating with other people, not (usually) with machines.
Mathematical notation is a means of human-to-human communication. Yet anything but the most trivial math soon becomes inscrutable to the untrained. As far as I can tell, mathematicians are mostly happy with this. The language used allows for precise, compact expression of complex (i.e. structured) ideas. That understanding requires training and rigor is not considered inherently bad.
For whatever reasons the same terseness and concision is derided by many programmers who seem to believe that ease of understanding by the moderately skilled is highly important.
Unless you're designing your own ICs and then programming them in machine code, there's some amount of logic and knowledge built-in already. I don't have to explain to the computer how to add two numbers together, or how to take a square root, or sort a list. It is necessary to understand the vocabulary that the computer already has. That's what OP meant by learning idioms: I can either painstakingly describe how to make a sandwich, or I can just say "make a PB&J" and you know what I mean.
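A small sketch of that contrast, assuming nothing beyond the standard library: spelling out a sort by hand versus saying "sort" in the vocabulary the language already has (the `sort_by_hand` helper is hypothetical, just for illustration):

```python
# Painstakingly describing the recipe: a hand-rolled selection sort.
def sort_by_hand(xs):
    xs = list(xs)
    for i in range(len(xs)):
        # Find the smallest remaining element and swap it into place.
        m = min(range(i, len(xs)), key=xs.__getitem__)
        xs[i], xs[m] = xs[m], xs[i]
    return xs

# Or just use the idiom the computer already knows: "make a PB&J".
data = [3, 1, 2]
print(sort_by_hand(data))  # [1, 2, 3]
print(sorted(data))        # [1, 2, 3]
```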
I think the point that OP is trying to make is about skill acquisition. He argues you can't learn to program well merely by learning programming concepts. Rather, you must immerse yourself in reading and writing real programs.
And is not (good) writing also about logic - even though sometimes it's fuzzy or the (twisted) logic of human psychology?
Writing requires "logicking out" the likely responses of the reader - and the pseudologic of parsing by the human brain (see Dan Ariely's "Predictably Irrational: The Hidden Forces that Shape Our Decisions").
His theory might be true if you were only talking about writing a program from scratch. But programming is a lot more than that, because except for the smallest, most trivial exercises, programming is a team sport. Which means you need to be able to understand somebody else's code, and you also need to be able to modify it in a way that makes sense.
You can have a program which is beautifully structured, and factored in an extremely clean way. But what happens when you have to modify it in response to a change in requirements? And what if you have external code that depends on the existing interfaces? At that point, the skills needed to be a good programmer are quite different from that of a writer.
I can think of a number of novels where it's obvious some scenes got moved around for plot reasons, and there were continuity errors or other ways in which the seams showed.
But novelists only have to deal with that for maybe a year or so; programmers have to deal with this problem for potentially decades, and at that point it's a completely different problem just because the scale is different.
I'm working my way through Stephen Donaldson's latest Thomas Covenant novel (book 9), part of a series he started writing in the 70's (first book published in '77). He expects to have the final book (book 10) completed by Fall 2013.
Programming definitely requires writing skills (expressiveness and concision most importantly).
However, it's about structurally combining that writing into maintainable code, building on idioms of the past to achieve error-free use. It's about mentally modeling what parts of the machine are doing, and correctly understanding those interactions. It's about risk assessment, experimentation, knowing when to call something quits. It's about going back over old work to do it a better way. It's about decomposing a complex process into many simpler steps. It's about reading -- lots of reading, in fact, usually to fill in the details of your mental models.
Programming really isn't anything other than programming. No nearby field maps onto it directly (electrical engineers can be horrible programmers yet brilliant electronics people; the same goes for web designers), but lots of other skills parlay into part of the panoply of skills that gets you past the finish line.
People write bad code when they have only part of those skills (or when they do not put out the effort to use the skills they have, due to time or willpower constraints).
Another reason some programs suck: they picked the wrong problem to code.
"Besides a mathematical inclination, an exceptionally good mastery of one's native tongue is the most vital asset of a competent programmer." -- Edsger Dijkstra
As a programmer coming from a linguistics background, this sentiment has always resonated with me. As much as math is a part of my job, recognizing the transferable skills from human language use has probably been the primary reason that I've been able to land programming jobs and write fairly successful code.
To have good taste is one thing (the ability to recognize good from bad) but to be able to produce good work is another. As many good writers (and the author of this post) suggest, the best way to get better at writing is to read good writing and to write more. This method (of observing masters at work and practicing the craft yourself) works for many disciplines. Should it be surprising that it applies to hacking as well?
Sadly (and tellingly) it's not necessarily the most successful code (success comes from meeting user needs, not from good code), but how else can you judge it, apart from reading it? This is made worse because it's hard to tell whether code is needlessly complex without understanding the problem it solves, which may itself be complex. Further, if you don't yet know what good code is, how can you recognize it? Of course, if you are intelligent and reflective and try ideas out, you can learn from both good code and bad - it's all raw material, grist for the mill.
IMHO the hard part of programming is understanding the problem to be solved. The solving is easy.
Saying programming is writing is like saying building something is 'speaking to materials'.
When you look at software, what you see is not a language, it is a machine. A machine that is presented in humanly understandable and communicable form, but nonetheless, something with a particular kind of underlying determinate structure.
Yes, the main part of the conclusion is the same: you have to learn by doing. But because software is design -- rational manipulation of objective 'material' -- knowledge about how it works is also an intrinsic part of doing it well.
Design means creating something through clear knowledge: we go with particular design ideas because we can predict their outcome and effect. This is the knowledge that the article underestimates. We choose quicksort not simply because we have immersed ourselves in social norms, but significantly because its average-case running time is provably optimal for a comparison sort.
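The quicksort mentioned above, sketched minimally (a textbook list-comprehension version, not production code):

```python
# Classic quicksort: partition around a pivot, recurse on each side.
# Average-case O(n log n) comparisons; worst case O(n^2).
def quicksort(xs):
    if len(xs) <= 1:
        return xs
    pivot = xs[len(xs) // 2]
    less = [x for x in xs if x < pivot]
    equal = [x for x in xs if x == pivot]
    greater = [x for x in xs if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

print(quicksort([5, 2, 9, 1, 5]))  # [1, 2, 5, 5, 9]
```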
That kind of determinate knowledge that programming by nature can have is important. It is missing something to lump it in as just being like spoken language learning.
I agree with him. As kylematthews points out, it's not about the substance of the thing--capturing an audience, I/O calls, or anything like that. It's about communication.
Communication is about translating something conceptual inside your own head to become understandable by something (usually a person) outside of it. The way we do this is by language. A programming language is merely the way we communicate with a computer. To do it right, you need to understand the parts of the computer that you want to change.
It's not a mistake that we used to call impressive coding "wizardry" or "deep magic". It was about casting spells... except that if you actually step to the side and look at what "spell" means, it's simply a story.
A good program tells a good story. You might not appreciate the characters or the plot, but the computer sure does.
I don't think that I can agree with the op on slashdot.
Programming is not about knowing lots of words or grammar. It is about getting the thoughts clearly structured and understanding quickly what a problem is really about.
THIS is quite the same problem as writing text. But, IMHO, the author approached this as someone who does not know how to write text either. It's not about ingenious choice of words and correct application of grammar. It is about clarity in understanding the problem and then finding a way to get to the point clearly and quickly.
If a solution is in the mind, 70% of the way is done. Then it is important not to give up until the idea is on paper in the same shape it had in your head.
So I think the analogy is a good one, but the op has not necessarily found the reasons why.
I spent two years living in a foreign country learning a new language (my first foreign language) immediately before I started learning to program and it's always struck me how similar the two experiences are.
All good communication comes first from good thinking. Whether or not we can communicate in a certain medium (speaking a language, writing, painting, music, programming) depends entirely on how fluently we can translate our thoughts into that medium.
So good thinking + good understanding of programming concepts = good code.
This makes me think of people who would try to "memorize" for e.g. science tests. They were ok as long as they could plug and chug, but they couldn't derive anything on the spot.
I'm wary of tickling my ego, but I recall not entirely seldom forgetting a formula, and so simply starting with other stuff and deriving it in the margin or on scratch paper. I think many of the instructors liked that, as well, as it showed a more conceptual grasp.
This discussion brings to mind Robert Lefkowitz's Pycon keynote from 2007 on Programming Literacy, very similar to this video from Stanford: http://www.youtube.com/watch?v=Own-89vxYF8. He is definitely an advocate of literate programming and basically says we will all be able to read programs when they can be written in English and not in code.
I like the author's premise. It's hard to find good writing that hasn't been rewritten.. a lot. That seems true to me about code as well. Perhaps we're seeing a lot of "rough draft" code being pushed to production. Novels never get published like that (hopefully!).
The only problem I see with this is when Bob, David, Mike and Steve have their own ideas about writing a story and take your love story then turn it into a science fiction novel about aliens, cowboys, race cars and heavy metal.
No, it's a kind of data. It has more in common with a spreadsheet than a novel. And like a spreadsheet, you can get yourself into lots of trouble if you don't format the data in a way that suits the tool you are using.
Way to take this conversational thread out of the analogy! It's an analogy that is instructive but clearly not applicable to all writing. I hope never to see code that looks like a Faulkner novel, for example.
I've read the article. I am sure that when a child is born he has, say, a 5% chance of having the ability to be a programmer. Once the lucky one grows up, he will acquire all the required information and experience just because he has the ability, and finally, say in 10 years, will be a good programmer. The other 95% may do everything they want -- they can learn how to program, read a lot of books, do all their homework and hobby projects -- but they will only ever grow to a certain level.
I want to believe what I've just written, because it guarantees that I will not have a lot of competitors and I'll always have a job when I want one.
Programming is a compressed form of serial semantic thinking. It requires the same brain structures as math and language, so it's similar to writing, except it's more specialized and more restrictive. It's also very effective.
For more, let's start with source code. Suppose we have the source code line
a = b*c
Reading this line, we want to know what it does and check that it's correct. So, we need to know what the line 'means'.
But, we conclude that
a = b*c
doesn't really mean anything.
Of course if we saw
F = m*a
we might guess that the variable names were mnemonic and guess Newton's second law that force equals mass times acceleration. Okay, now we know what the line means and can check if it's correct.
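To make the contrast concrete, here is a sketch (the numbers are invented for illustration) of the same line carrying its meaning in names and comments rather than leaving the reader to guess:

```python
# Newton's second law: force equals mass times acceleration.
# Units are spelled out so the line "means" something on its own.
mass_kg = 2.0           # mass of the object, in kilograms
acceleration_ms2 = 9.8  # acceleration, in meters per second squared

force_n = mass_kg * acceleration_ms2  # resulting force, in newtons
print(force_n)  # 19.6
```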
Okay, we are beginning to see:
A line of code such as
a = b*c
doesn't mean anything. So, we have nothing to read and no way to check. So, we don't have anything.
We could write
F = m*a
and begin to guess what this means. But we are still in trouble: We still have no good way to communicate meaning to permit understanding or checking.
So, we have to ask where
F = m*a
came from: physics books. And what did those books do? Well, they wrote in a natural language, say, English. Always, an equation such as
F = m*a
was just an abbreviation of what was said in English. And, in particular, from the English there was no question about the meaning of each of the variables.
Net: math, and science with math, are written in complete sentences in a natural language. The variables are all clearly defined, discussed, explained, etc. At no time is an algebraic expression of such variables regarded as a substitute for the natural language. Take a physics book, throw out the English, leave just the equations, and you will have nothing.
Physics and math understand this; so far, computing does not.
So computing tries to write
force = mass*acceleration
or some such, and omit the English. For simple things we can get by this way. Otherwise, this approach is hopeless, at best presenting the reader with a puzzle of guessing.
The matter of using mnemonic variable names as parts of speech in English is a grand mistake but common in writing in computer science. Bummer.
Bluntly, computing has not figured out that there is, so far, just one way to communicate meaning: use complete sentences in a natural language. Period. That's all we've got. But computing has fooled itself into believing that algebraic expressions with mnemonic variable names form a 'new language' that, in computer source code, can provide the needed meaning without a natural language. Wrong.
For
F = m*a
the situation is simple. But significant source code has much more complicated cases of 'meaning' to communicate. Again, computing tries to get by, say, using a big library of software classes, relying on the mnemonic spelling of the classes and members and on the documentation of the classes. In simple cases we can get by this way. But fundamentally, for some complicated code, the meaning, workings, etc. just must be explained, and there's only one way to do this: complete sentences.
So, writing these complete sentences to communicate meaning effectively is 'writing'.
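A sketch of what those complete sentences can look like in practice -- an ordinary docstring (the function and its physics are hypothetical, chosen only to echo the F = m*a example):

```python
def settling_time(mass_kg, damping):
    """Estimate how long a damped oscillator takes to settle.

    The system is modeled as m*x'' + c*x' + k*x = 0. We approximate
    the settling time as four time constants, where one time constant
    is 2*m/c for an underdamped system. All inputs are in SI units.
    """
    return 4 * (2 * mass_kg / damping)

print(settling_time(1.0, 2.0))  # 4.0
```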