Hopefully more controversial programming opinions (dadgum.com)
103 points by elssar on Aug 30, 2012 | 98 comments


[update: I've got hold of some of the code. Here's a copy of one of the programmes http://pastie.org/4615158]

I have an awesome story that I think I've told before, but I'll tell it again anyway :)

My father-in-law is a pharmacist. Back in the early 90s computers were starting to make their way into pharmacy and the first few pharmacy software packages appeared. This increased with time, as you'd expect. At some point he decided it was inevitable, but (I have no idea why, except that he's always been quite independent) he decided he'd write his own software. He took himself off to night school and learned dBase III. He'd never written a line of code prior to this.

He then proceeded to write an entire pharmacy suite of software tools that are still in use. They've been heavily updated and added to over the last 20 years, but it's still all dBase III running on DOS. It's a staggering amount of code over hundreds of files, but it all works. I've seen the code and it wouldn't pass a first-semester coding course: each programme is a single 10,000-line spaghetti-fest. I've tried to explain functions/methods to him but he didn't understand, and in the end I gave up. Apparently he doesn't need them!

I can't overstate how complex this software is: it's stock control, point of sale (including the usual things like birthday vouchers, discounts, accounts, etc.), prescription dispensing, patient records and so on. Everything for a modern pharmacy. The Australian health department knows about it and it's met all their criteria. It does everything.

It's mindbogglingly awful code by any metric imaginable, yet it's robust, appears to be as-good-as bug free, and has been maintained (government regulations for pharmacy change constantly, so it's required major changes each year to keep up to date) and in production use for almost 20 years. Oh, and it's fast :)

[Edit: looking at the other comments it seems I've replied to the wrong thing. I was actually writing a comment in response to this http://prog21.dadgum.com/87.html (Write Code Like You Just Learned How to Program) which was linked to from the main article]


This reminds me of a manager of mine who wrote a 10,000 line Perl script.

He taught himself how to write Perl in a week. And then he started coding. He wrote a script that automated a very tedious testing process so perfectly that we were able to reduce our test team by about 70% and test more precisely than before.

It was fully procedural code, and it looked monstrous. Our architect insisted it wouldn't work out. He even set out to rewrite it in Java, but that project never finished. He was never able to complete the rewrite.

It motivated me to learn Perl. Since then my life has been totally different. I've done mountains of work working alone, hacking during the nights.

I learned the following things from that episode:

1. First write the program.

2. Write the correct program.

3. Write the program to run fast and efficiently (scalability and all that).

4. Beautify the program.

But most people never get past 2). The point is that there are tons of people who can do these micro-optimization tasks. The people who win in an ordinary work environment are the ones who know the art of converting ideas into sellable products in the fastest way possible.

By the way, that manager also wrote database clients in C++ which helped us troubleshoot our in-memory databases from remote locations. Again the architect challenged him to get it done in Java; as before, the project never completed :)

It's a fact that the majority of the software world is hacked together and held in existence with tools like Perl and PHP. The people who care about artistic elegance are few, they often fail, and generally they don't matter.


The architect sounds obsessed with Java... first sign of craziness!

Disclaimer: I like C/C++/Python.


Actually it's all the 'design' craze. Most architects think drawing UML diagrams in MS Paint is a sign of technical superiority.

The real issue, I think, is that heavy OO programmers (read: Java) can't live without:

1. Design patterns- The art of bloating already heavily bloated code.

2. Avoiding meta programming of any kind.

3. Love of getters/setters and absolutely anything that lends itself to code bureaucracy.

Why get/set a variable directly when you can write tens of classes and methods to set it?

4. Avoid learning the command line. And write bad implementations of sub functionalities of tools like awk/sed to achieve the same tasks.

5. Lengthen anything and everything as much as you can. thisIncludesTheVariableNames, class names, method names, package hierarchies, class inheritance. You name it, they can pointlessly lengthen it.

6. If you write code in a file, at least 40% of the lines MUST be try/catch statements. If you can't be satisfied with that, write your own exception classes and invoke point 5.

7. Always write code which can't be read or figured out unless you have auto complete, intellisense and other IDE goodies.

8. Use XML as much as you can so that you can dogfood your insanity even more.

9. Make the code so verbose that even simple programs sound like very complicated ones that can be written only by you.

and many more...

If you don't follow these points Java programmers/architects think you are writing bad code.


Ugh... don't blame "design patterns". They are just tools, and any good developer utilizes them nearly constantly. Are we really going to suggest that you can write good imperative code that doesn't involve delegation, strategies, or factories at some level?

The issue is with a culture that thrives on complicating relatively simple concepts by abusing those tools, not with the tools themselves.

Design patterns don't make for bad code, bad coders do.


I'm sorry, but your comment is really not much more than pointless, cynical Java bashing. Java has its share of problems, and it's definitely not as modern (regarding language features) as some of the other languages today (although that seems to be changing too), but if you're going to attack it, then at least do it for its bad characteristics, not because you personally dislike/don't understand certain features or principles of OO development.

"Design patterns- The art of bloating already heavily bloated code."

Design patterns are to OO code what salt is to food. Put just the right amount, and you get a tasty meal. Put too much, and you get an inedible pile of crap. Design patterns are not the problem, they are a solution. Bad developers who don't understand when and how to use them are the problem.

"Love of getters/setters and absolutely anything that lends itself to code bureaucracy."

It's true that Java is more bureaucratic than most languages, but there is a reason why getters and setters are the standard. And the reason is encapsulation. I can expand on this further if you want me to. For what it's worth, I think that this is one of the things that Scala got right [1].
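
To make the encapsulation point concrete, here's a minimal Java sketch (the Account class and its field are made up purely for illustration):

    public class Account {
        private long balanceCents;

        public long getBalanceCents() {
            return balanceCents;
        }

        public void setBalanceCents(long newBalanceCents) {
            // A later rule ("balances may not go negative") can be enforced here
            // without touching any calling code -- that's the encapsulation win.
            if (newBalanceCents < 0) {
                throw new IllegalArgumentException("balance may not be negative");
            }
            this.balanceCents = newBalanceCents;
        }
    }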

"Lengthen anything and everything as much as you can. thisIncludesTheVariableNames, class names, method names, package hierarchies, class inheritance. You name it, they can pointlessly lengthen it."

CnfNfEx. ConfigurationFileNotFoundException. Which one of these two names do you think describes what the class does better?

"If you write code in a file, at least 40% of the lines MUST be try/catch statements. If you can't be satisfied with that, write your own exception classes and invoke point 5."

I don't know which APIs you worked with, but those that I use don't force me to put 40% of my code into try-catch statements.

"Always write code which can't be read or figured out unless you have auto complete, intellisense and other IDE goodies."

I can see how autocomplete would make it easier to write code, but to read it? Nope, Java code is perfectly readable in a plain old editor, actually. But it's definitely not as easy to write using a plain old editor, to be perfectly honest.

"Use XML as much as you can so that you can dogfood your insanity even more."

I agree with this one. XML madness needs to stop. Fortunately, it seems to be stopping already. For example, most modern DI frameworks don't require you to write your configuration in an XML file.

"Make the code so verbose that even simple programs sound like very complicated ones that can be written only by you."

Verbosity is an unfortunate trait of Java. But nobody writes code so that it would seem that only they can write it. If they do, that's called over-engineering and is, believe it or not, considered to be a bad practice.

[1] http://www.dustinmartin.net/2009/10/getters-and-setters-in-s...


I read the grandparent as criticizing Java programmers. A sufficient number of bad programmers seem to fall into these traps often enough that the result is a mess.

Design pattern: I wouldn't be surprised if they are used too much.

Love of getters/setters: not the fault of the language, that's just bad practice. And no, you don't get encapsulation. Occasionally getters and setters do some verification or filtering, but most of the time they don't, in which case you just have raw access to the object's mutable state. Be honest and make your variable public; that's less code. If you ever need to add checks or filters (almost never), then just refactor.

Long names: the jury is still out for me on this one. In your example, I'd say the overly long name comes from overly narrow functionality. For something that specific, you should use a description string.

Try/Catch: If your QA section tells you to check all exceptions, you may have quite a bit of those statements. (Also, maybe he included the body of the try/catch statements?)

Code that requires an advanced IDE to approach: a good IDE lets you manage and tolerate higher levels of complexity. You'll also be less encouraged to simplify your code. You may not even notice when it becomes too complex for a humble vi user.

Verbose code: people tend to stop as soon as it works, without simplifying their code further. It's not the language's fault, nor is it done on purpose. I hate it when I see code like that. I often have to apply various correctness-preserving transformations before I stand a chance of understanding it.


"In your example, I'd say the overly long name comes from an overly narrow functionality. For something that specific, you should use a description string." - What use is a description string when you're staring at a code listing, trying to figure out what it does?

"a good IDE let you manage and tolerate higher levels of complexity." - Which is a good thing.

"You'll also be less encouraged to simplify your code." - Simplifying code is a discipline that can just as easily be avoided by people writing in Notepad. Simpler code comes with experience, not through tool abstinence.

"You may not even notice when it becomes too complex for a humble vi user." - If your tools don't get the job done, switch tools. If you find yourself saying "My tools would work fine if only OTHER PEOPLE would ...", you're using the wrong tools.


1) This error looks like it can't be recovered from. Just throw UnrecoverableError("Config file not found"), and you're done. The code is just as readable. Now if you also need to easily catch it, you need better pattern matching than Java and C++ can give you (OCaml works well). Again, the code will readily display the relevant string. With C++ and Java, okay, drop the string. But try to limit the scope of your exception to the module it belongs to, and use context to give it a shorter name. (There's a rough sketch after this list.)

2) I agree

3) Thinking about it… you're probably right.

4) I agree
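
Re point 1, a rough Java sketch of the generic-error approach I mean (UnrecoverableError and the config-loading code are invented for illustration):

    // One generic error type; the detail lives in the message string.
    class UnrecoverableError extends RuntimeException {
        UnrecoverableError(String message) { super(message); }
    }

    class Config {
        static java.util.Properties load(java.nio.file.Path path) {
            try (java.io.InputStream in = java.nio.file.Files.newInputStream(path)) {
                java.util.Properties props = new java.util.Properties();
                props.load(in);
                return props;
            } catch (java.io.IOException e) {
                // The string already tells a human what went wrong; no
                // ConfigurationFileNotFoundException subclass required.
                throw new UnrecoverableError("Config file not found or unreadable: " + path);
            }
        }
    }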

My point was, IDEs are double-edged. Like debuggers. Without such fancy tools, you are forced to think before you write nonsense to the compiler. Good programmers will do the thinking anyway, but many others need some "encouragement". Optimizing for good programmers is a good heuristic, but sometimes you also need to prevent the bad ones from making too many mistakes.

A similar argument can be made about functional languages: I can avoid side effects in C++ just fine, and I mostly do because it's plainly simpler most of the time. But many programmers need at least an OCaml straitjacket.


OK, I thought you were taking exception(har har) to the name length, not to the fact that such an exception subclass was being used rather than simply filling in a description.

However, even then I think it's not quite so simple. In most cases simply putting in a text description for the exception would be sufficient, given language support for catching it should that be needed. But if the application at a higher level needs to know what went wrong in order to inform the user in a friendly way, you'd need some kind of mapping between the exception and what to display on the screen. And if your app is internationalized, simply displaying the description of the exception would be even more problematic, especially if you have the same exception being thrown from different places (in which case you need to make sure none of their descriptions differ by even a single character, lest you destroy the i18n mapping). I couldn't say definitively that subclassed exceptions would be superior in this case, but one could make an argument in their favor.
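
As a rough Java sketch of that exception-to-message mapping (the class name and resource keys are hypothetical):

    // A dedicated exception type gives the UI layer a stable key to map onto a
    // localized message; the thrown detail string is only ever shown in logs.
    class ConfigurationFileNotFoundException extends RuntimeException {
        ConfigurationFileNotFoundException(String detail) { super(detail); }
    }

    class ErrorPresenter {
        static String userMessage(Exception e, java.util.ResourceBundle messages) {
            if (e instanceof ConfigurationFileNotFoundException) {
                return messages.getString("error.config_file_not_found");
            }
            return messages.getString("error.generic");
        }
    }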


Regarding 1)... Yes, in this particular case you could just throw an UnrecoverableError (or whatever) and pass a string with a message of what went wrong to its constructor. But that wasn't my point at all. I wasn't trying to make a point about how to structure code or deal with errors/exceptions, I was trying to make a point about naming conventions. My example could just as easily have been something like:

"PtgSL. PatagonianSeaLion. Which of these names describes the class better?"


The long name, of course.

Now, whatever you do, long names should be reserved for infrequently used things. If a long name litters your program, you'd better use a shorter one. Conversely, short names belong to frequently used things. This is because a long name is a cognitive burden. On the other hand, one needs to learn what is behind each label, and doesn't want to forget it.

Fortunately, good long names (such as "PatagonianSeaLion") tend to be about very specific things. So specific that you rarely need them. But there can be bad long names, like "System_Out_Println". Something this pervasive should never have such a long name.


"It's true that Java is more bureaucratic than most languages, but there is a reason why getters and setters are the standard. And the reason is encapsulation."

Zen tip: When there is a reason for a bad thing, that is called a bad reason.


So much this

100 lines of Python can be turned into a hundred man-hours / 6-digit budget project in Java


So, the language choice of a piece of software determines the amount of time and money that is required to build the system?


Of course it does. Why is this surprising?

Try building a (safe) web app in C and see how long it takes.


The bit about building a web app in C is totally off-topic from your claim that 100 lines of python would equal 100 hrs of Java. If you can only write good software in a short amount of time in one or two languages, that's just a reflection of you as a developer and not the language, and I'd seriously challenge you to show me something you've written in 100 lines of python that I couldn't hire a single senior Java developer to write in the same amount of time (lines of code doesn't equal cost, so I'm not interested in debating lines of code nor do I think it's a valid measure of anything).

I'll give you a short story that approaches this issue from both sides. A client walks into a consulting shop with a piece of really bad client/server software written by a different consulting shop. The server is written in Ruby/Rails and the client in Java (Android). Both client and server code is horrible even though the server is written in Ruby (a supposedly beautiful/compact/expressive language) and the client in Java (super ugly long winded grandpa language, or whatever). The client paid the original consultants about $60k in total. One Ruby engineer and one Java engineer rewrote the entire thing in a few days for 1/20 the cost without really reusing an ounce of the original code. Ruby didn't stop the original server engineer from delivering horseshit in too much time, and Java didn't stop the second client engineer from delivering a clean/functional/performant app in just a few days.


Your second paragraph is what I meant

But I've never seen a "software consultancy company" do a project except in Java

Ruby and Python can be beautiful yes, but in the hands of average Java programmers it just becomes a horrid mess


So where you live has no bearing on crime levels?


So, you're suggesting the following analogy: The crime level of a region relates to the region itself the same way that the quality of software relates to the language the software is built on? You're comparing a property of an object to the object itself and a property of an object to another property. Pretty awesome you'd make that mistake in a thread where you're agreeing with someone's criticism of OO principles.


Isn't your list a variation on the well known list of:

- Make it work

- Make it right

- Make it fast

http://c2.com/cgi/wiki?MakeItWorkMakeItRightMakeItFast


Depends on whether step 2 is a complete rewrite (in a different language) or further hacking to "make it right".

My process tends to be:

1. Make it work - Knock up a prototype (lots of assumptions made and noted, limited error checking, etc)

2. Make it right - By starting from scratch and using what was learned in making the prototype

3. Make it fast(er) - Having a prototype, and trying it at scale, should have already given me an idea of what algorithms/structures to use/avoid in Stage #2 but, obviously, a further optimisation stage will almost always be beneficial.

Ideally (if time allows) the prototype is written in a completely different language to the desired target. This is the most important point.

Most of the production code I write is C and I tend to do my prototypes in Perl. This helps me for several reasons:

a) ADTs (lists, hashes/dictionaries, etc) are just easier/quicker in Perl/python/ruby than in C (I do have my own C library that I reuse for ADTs so it's not as if it's a great chore writing them each time, but ADT instantiation is just easier in Perl/python/ruby than C).

b) It's also easier to play around with different algorithms if you can switch between ADTs more easily.

c) I can use the prototype stage to continue to learn new languages (it used to just be Perl but has extended to include languages like Python and ruby). This list will keep on growing.

d) It prevents me (or the business) stopping at Stage #1 and using the prototype code for production. (This could be even worse if the prototype was written in C.)

e) A prototype, being quicker to knock up, means faster to getting it in-front of someone else to look at and give feedback.

f) It prevents me just copy-and-pasting chunks of the prototype code as production code so I, hopefully, don't carry over any of the assumptions I'd made in creating the prototype.

g) "Knock up a prototype" doesn't mean no design/testing/etc. I usually end up with design notes and some automated black box testing that can be reused in the later stages.


Because understanding exactly what the program should do is more important than understanding what the best practices in CS are, or how to abstract things.

If you can keep all interactions of your app in your head, then it doesn't matter how badly designed the code is. Architecture is only important because it makes it possible to change things without understanding the whole thing.

And complete understanding with bad architecture beats good architecture with incomplete understanding.


I've never seen anything labelled a "best practise" that has had anything to do with what I would call CS.


>Architecture is only important because it makes it possible to change things without understanding the whole thing.

And for that you need to understand the architecture, of course.

You always need to understand something_(n-1) to understand something_n, ad infinitum


It's a fractal, yes, but you can do with only understanding your branch in the highest resolution, you don't need all the details of the whole system (if it's well designed).


I wonder how much that initial version changed during the 20 years of maintenance.

I find it hard to believe that someone was maintaining it for 20 years without improving the quality (of the code/design) and making it easier to maintain. Have you seen the code base of the initial version or of the version that is currently being used?


I've known my wife for the last 18 or 19 years and it was written before that, so I haven't seen the original. I have seen him debugging code in the evenings, though. This consisted of printing out the entire programme (in the dBase sense a "programme" is a single file, but a programme in our sense would be a collection of dBase programmes) and the paper (dot matrix, so it's all connected) trailing all throughout the house as he draws pencil lines all over it to reconstruct programme flow. No indentation at all.

The short answer to your question is no, I didn't see any evidence of improved structure or practices at all. All variables global. goto ftw. No procedures/methods/functions at all. He'd never even heard of the concept when I tried to explain.

[Edit: managed to get hold of some of the code (see parent) and it looks like I was wrong on the indentation - he did start to use it at some point]


I'm not sure if that's scary or awesome. Probably both.

Thanks for sharing!


Closely looking at a printout is an underrated archaic practice.


Ah yes, "desk checking." When compiles took hours or would only run during the overnight batch cycle, you'd better believe it was worth spending some time manually reviewing your code.


To be fair I know of programmers that hack their code until it works and then they forget about it. Next time they need to change something: take the ax and hack away...


You find it hard to believe because you are convinced it can't be done otherwise. Once you see someone doing it, you will believe otherwise.


The version given to me by my professor was: "Think like an amateur. Execute as an expert."

Originally by Takeo Kanade. ( http://www.cs.cmu.edu/interviews/kanade/ )


I would like to make a few points. This is only possible because your father-in-law is the only one who has to deal with this program, and also because he only works on this one big, complex thing which he knows really well.


Oh hell yeah, absolutely. This sort of code is not uncommon at all, but it is pretty much incomprehensible (let alone maintainable) for anyone beyond the original implementer.


If one day somebody else does inherit the codebase, are there any tools out there that are designed to help people in that situation? Like something that de-spaghettifies everything?


Appropriately, it's a book: http://www.amazon.com/Working-Effectively-Legacy-Michael-Fea...

Some automated tools could help (although I doubt they'll work on dBase III): static analysis to see what's there, and version control so you can start at the top, log your way through, and roll back to a previous working version.

But it's at the very least weeks of pain.


I would go farther in regard to the Computer Science point.

CS programs should be burnt to the ground. And in their place we should build up three separate things. First, software trade schools that are actually good (e.g. not ITT). Second, for-real software engineering majors at colleges that are heavy on things like practical programming, tools (version control, issue tracking, automated build systems), and refactoring, teach multiple languages (JavaScript, Python, Ruby, SQL, etc.), and only delve into theoretical underpinnings as warranted (compare electrical engineering vs. physics programs). Third, legitimate Computer Science programs that are contractually limited to about 5% of the current CS student capacity for at least the next two decades, teach a very mathematics-heavy and science-focused CS program, and have zero expectation that the graduates of the program will go on to write software in industry after graduation.


The problem with software trade schools is that, even if you don't require a pure maths focus, I still expect people who program professionally to be good at maths. I would expect most programmers to be able to analyse algorithms in a formal manner if they have to. And the people capable of doing maths at that level are not the people who traditionally go to trade schools.


To make something fast or to make something scale you don't need a proof. You need a profiler.


To a certain degree, this is true. However, a profiler won't turn the DFT into the FFT.


The FFT only needs to be invented and optimized once, though, then everyone else can just link in libfftw or kissfft.

All those software tradespeople can do the jobs that require connecting pieces together. A tradesperson can be taught enough to know which algorithms to apply in which situations.

Computer scientists become academics inventing new algorithms, and analyzing and perfecting the ad hoc algorithms created by software tradespeople.

Aside: I wish the English language would just decide that "man" and "men", when used as part of a compound word like "tradesman", are gender neutral.


> The FFT only needs to be invented and optimized once, though, then everyone else can just link in libfftw or kissfft.

> All those software tradespeople can do the jobs that require connecting pieces together. A tradesperson can be taught enough to know which algorithms to apply in which situations.

Is that a circular argument? If we had enough people cranking out smart enough algorithms (like the automatic programming example upthread), would we need so many tradespeople?


A profiler is for micro-optimisations. A profiler won't let you go from bubble sort to quicksort, for example.


A profiler will tell you if sorting is actually using a meaningful amount of time in your application.

If you go ahead and blindly change your bubble sorts for quick sorts then at best you may be wasting time doing something that has no effect on performance, and at worst you may be making your program slower.

A naive quicksort (one with a fixed pivot choice) is slower than bubble sort for nearly sorted input, after all.
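
As a crude illustration of "check whether the sort even matters first" (a real profiler answers this properly; this stopwatch sketch just asks the same question):

    import java.util.Arrays;
    import java.util.Random;

    public class WhereDoesTheTimeGo {
        public static void main(String[] args) {
            int[] data = new Random(42).ints(2_000_000).toArray();

            long t0 = System.nanoTime();
            int[] sorted = Arrays.copyOf(data, data.length);
            Arrays.sort(sorted);                        // the step we're tempted to "optimize"
            long sortNanos = System.nanoTime() - t0;

            long t1 = System.nanoTime();
            StringBuilder report = new StringBuilder(); // stand-in for the rest of the program:
            for (int v : sorted) {                      // formatting and output work
                report.append(v).append('\n');
            }
            long restNanos = System.nanoTime() - t1;

            // If the sort is a rounding error next to everything else, swapping
            // algorithms won't be the win you hoped for.
            System.out.printf("sort: %d ms, everything else: %d ms (report size %d)%n",
                    sortNanos / 1_000_000, restNanos / 1_000_000, report.length());
        }
    }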


It does give you information on where the most time is being spent, but it doesn't tell you what to implement. Without adequate algorithm knowledge you might try a micro optimisation when it really needs a completely different algorithm.

With superficial knowledge you might stick to certain rules without really understanding them. The bubble sort case you gave is a perfect example.


You are operating under the assumption that the performance of an average piece of software is algorithmically limited; this is almost never the case. In average software the core performance characteristics are typically bound by far simpler issues, such as excessive database queries.

A profiler is not good at speeding up an algorithm, but a profiler is THE tool you need to speed up a system, and most optimization in the wild is system optimization, not algorithm optimization.

This particular example is a perfect case study in how excessive focus on computer science can lead you astray in software engineering.


In my view, all performance optimisations need to be driven by hard data and a profiler is just one source of this kind of data.


Maybe not, but Google will :)


I know almost no programmers who analyse algorithms in a formal manner.

Most coding doesn't require it - certainly not the kind of thing that most large companies want coders for.


No. 2 on your list does already exist in the real world, albeit as a graduate program. CMU offers Software Engineering (with very minimal CS involved) at both the main and SV campuses, perhaps even at others as well.


> Computer science should only be offered as a minor. You can major in biology, minor in computer science. Major in art, minor in computer science. But you can't get a degree in CS.

The problem is not that universities produce poor CS majors. They don't. The problem is that everyone else expects a CS major to be a good commercial developer. Some are, but that's just the odds.

If you want to be a programmer, do a software ENGINEERING degree. It deals with the practical issues and you actually do lots of real, actual programming. Or go to a media design school, where you do lots of actual web programming.

Expecting every CS major to be a great programmer is like expecting every physics major to be a good baseball player. Sure, (s)he knows the optimal angle to strike the ball to achieve a home run, but actually doing so requires a whole lot of experience and real-world context.

(PS. I have a physics degree, and I'm a software developer, not a baseball player. But that's because I programmed a lot for fun and profit before going to university, and CS seemed like a big backwards step. And physics is more fun.)


I just read the post that inspired this post: http://programmers.blogoverflow.com/2012/08/20-controversial.... While a lot of these opinions sound reasonable, I want to know if they're actually true. All of these statements are expert opinions, but it's unclear how many of them are backed by research. Without controlled studies, experts can easily believe incorrect things.

If you hold an opinion, look at the evidence backing it up. If it's not strong, reduce your confidence. Or even better: Gather evidence, then form your opinion. And remember: Anecdotes don't count. I wish more of my colleagues would do this, but it seems most of them haven't heard of very many software-related studies.

If you want to learn more, I recommend http://www.neverworkintheory.org/ as a starting point. After reading some papers, you'll be surprised how limited our evidence-based knowledge is. Looking at software engineering studies made me realize that I'm not allowed to poke fun at psychology anymore. Even that field is more evidence-based than ours.


  It's a mistake to introduce new programmers to OOP before they understand the basics 
  of breaking down problems and turning the solutions into code. 
Given how well I remember classmates with little-to-no programming experience struggling to understand pointers while not being able to write the simplest algorithms, I thought this was a no-brainer. Is it really a controversial opinion?


> Is it really a controversial opinion?

I haven't heard anyone argue against it, but for a very long time every single curriculum out there started with either Java or C++, and had to talk about objects quite early on (you can't print "hello world" in Java without them, and you can't use C++ streams either).

And arguments I've heard about that practice as a proxy for your question, indicate that it is (or at least, was between the late '90s and the late '00s) controversial.


You must write a class in order to do "hello world" in Java. C++ streams don't require you to write a class, only use what appears to be a strange syntax. You can go a long way in C++ without defining any classes of your own.


You have to write a class to do "hello world" in Java, but you don't have to understand OO. If you just treat the class syntax as meaningless boilerplate you can learn a lot of programming concepts before needing to touch messaging.
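
Concretely, the boilerplate in question is just the outer wrapper here:

    // The class wrapper is required by the language; a beginner can treat the two
    // outer lines as ritual and focus on the statement inside main.
    public class Hello {
        public static void main(String[] args) {
            System.out.println("Hello, world");
        }
    }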

Having seen 3000-line single-class Java monstrosities, I can assure you that this is not only true in theory.


I've never understood what's confusing about pointers...


1) Indirection. Pointers require thinking in a few steps. Indirection is hard. It is a vital skill in any kind of programming (or especially debugging), but it does not come naturally to people who've only ever had to deal with the concrete.

2) Early on, you learn that ints and chars and floats and some_structs are fundamentally different data types. Then suddenly you're told that int*s, char*s, float*s, some_struct*s, and even void*s are fundamentally the same. Huh?

3) The fact that C uses * both to declare a pointer and to dereference one. These are conflicting meanings, and the unrelatedness of those two concepts is not sufficiently explained.


3) That makes sense. For example...

    int *i;
means "When you dereference i, you get an int."


When I see

   int a;
I read "create an integer variable called a".

    int *i;
means "create a variable that, when dereferenced, gives you an integer".

Makes sense once you understand it, but I can definitely understand how a beginner would find it confusing.


It may be a problem with the teaching though, rather than the subject matter itself. I understand why it takes a little bit of thinking to get used to, but not why it is fundamentally hard.

Now, algorithms can be difficult to understand and many of them use pointers... Do some people confuse the two?


I don't think the difficulty is fundamental, and I don't think anyone here claimed that it was. But it is initially challenging nonetheless.

To tie this back in to the beginning of the thread, being able to understand indirection (such as in pointers and algorithms) is a far more important skill than understanding OOP.


It's the abstraction that's difficult to get in the first place. In Basic, a variable represents both the data and its memory address at the same time; the two concepts are not separated. When you introduce pointers, you split that concept in two, address and data, and to complicate things you say an address is itself a form of data. In Basic, the variables are the memory: each variable you manipulate is equivalent to a piece of memory you can store things in or get things out of. With pointers, you lose that equivalence: memory becomes invisible through the code. I can have a memory block that exists in memory but is no longer pointed to by any variable. This means you get a separation between the way memory is and the way the code looks. What is difficult when learning pointers is getting that extra level of abstraction: you have to look at the code as an access route to the memory, not as the memory itself.



Here's my version of his opinions, probably even more controversial :P.

CS should be offered as a major by itself. All the most interesting stuff is CS-specific with indirect applications. Working on something like automatic programming is far more exciting than working on biology or art or what have you. (I can't think of anything more awesome or more CS-only than automatic programming.)

It is a mistake to introduce programmers to OOP.

A complex compiler is awesome. A sufficiently smart compiler may be a myth, but it is a utopian myth; we should strive for it. However, I would take it even further: program synthesis is better still. I'm in the business of telling the computer what to do, not how to do it, so there need be no obvious correspondence between what I write and what the computer executes--they just have to have the same semantics.

You shouldn't be allowed to write a library unless you have a thorough understanding of programming languages and some relevant math. There is always relevant math. Your functions should be accompanied by useful and verifiable laws others can depend on. Or maybe everyone should be encouraged to write libraries regardless of skill level and then the libraries could be ranked a posteriori. Any other guidelines make less sense.
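
For instance, a "law" here can be as simple as a property anyone can re-check. A plain Java sketch (no test framework assumed; reverse() is invented for the example):

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;
    import java.util.Random;

    public class ReverseLaw {
        // The library function under discussion (hypothetical).
        static <T> List<T> reverse(List<T> xs) {
            List<T> copy = new ArrayList<>(xs);
            Collections.reverse(copy);
            return copy;
        }

        public static void main(String[] args) {
            Random rng = new Random(7);
            for (int trial = 0; trial < 1000; trial++) {
                List<Integer> xs = new ArrayList<>();
                int n = rng.nextInt(50);
                for (int i = 0; i < n; i++) xs.add(rng.nextInt());
                // The law callers can rely on: reverse(reverse(xs)) equals xs.
                if (!reverse(reverse(xs)).equals(xs)) {
                    throw new AssertionError("law violated for " + xs);
                }
            }
            System.out.println("reverse/reverse law held for 1000 random lists");
        }
    }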

Pretty code is readable and readable code is pretty. If you can render your code as a nice pdf and distribute it as a paper, it's about as readable as it will ever be. Even if you can't, remember that aesthetics aren't random--there is a reason why pretty code is pretty.

Purely functional programming is a straw man. Even Haskell lets you write code that at least acts impure. Haskell is a local maximum. On the other hand: a purely functional spec that the computer uses to generate a potentially impure program should work. But I've already talked about that :P.

I don't know what a "software engineering mindset" is. It sounds like something a manager would say. Don't do stuff a manager would say. This is unfair to good managers but still a useful guideline. Have as much fun as you can unless people's lives are on the line.

I should note that I don't even think all these opinions are true. But a belief does not have to be true to be useful. If I could boil it down to a single sentence, it would probably be: math and CS theory aren't scary and you should reject conventional "wisdom". But that would be somewhat cheap--two independent and rather unrelated clauses joined with "and" may as well be two sentences :P.

Also, there's something very appealing about throwing out intentionally extreme opinions. I can certainly see why this guy keeps on writing his blog.


"Pretty code is readable and readable code is pretty. If you can render your code as a nice pdf and distribute it as a paper, it's about as readable as it will ever be. Even if you can't, remember that aesthetics aren't random--there is a reason why pretty code is pretty."

I'll restate this: "Every programmer should know more than a little bit about typography". I was amazed at how much learning design fundamentals improved me as a programmer. I've learned to communicate through code much more effectively than I ever had before. Thinking about grouping, spacing, and the like leads not only to more readable ("pretty") code but usually more efficient code as well.


Can you suggest any books?


I recommend "The Elements of Typographic Style" by Robert Bringhurst.


I wouldn't throw away the "software engineering mindset" quite so quickly. Part of an engineering mindset is roughly "I am prepared to stand up in court and defend all the choices I made designing this product to a jury of my peers". If we can get to that point, where there is a set of guidelines so clear, and so universally accepted, that you could get 12 arbitrary software developers to uniformly agree that the designer chose well in following them, and we could actually use them to do useful design work, that would be such a huge step forward that the profession would change unrecognisably.

We're not there yet.


Not sure how you can love automatic programming but dismiss biology. Humans are an operating system that is not only time-dependent but also spatially and gradient-dependent: self-mutating automatic programming with asynchronous message passing that is itself time/spatial/gradient dependent.

Oh yeah, and inserting breakpoints and print statements not only takes months, but also changes your code in a case-by-case fashion.

It's like reverse engineering for masochists.


Great stuff. Since these are controversial opinions, I somewhat agree/disagree.

- Computer science should only be offered as a minor. Good point, assuming that you don't consider theory an end in itself. I consider theory mind-opening, even if it doesn't make money per se.

- It's a mistake to introduce new programmers to OOP before they understand the basics... Fully agree.

- You shouldn't be allowed to write a library for use by other people until you have ten years of programming under your belt. Another good point. Writing a good library (or designing an API) is one of the more challenging things you can do, because it requires a good understanding of the problem, simple design, and sensitivity to conventions (which only come from long experience).

- Superficially ugly code is irrelevant. Somewhat true, but the main problem with ugly code is that nobody wants to touch it. So if it's functionally correct, readability remains irrelevant until you need to make a change.

- Purely functional programming doesn't work. Agree, in the sense that there exist problems where a purely functional approach is not the best. I think you can run pretty far with purely functional, though.

- A software engineering mindset can prevent you from making great things. Strictly true, but it should be said that the opposite mindset will eventually destroy the things you made.


"Superficially ugly code is irrelevant. Pretty formatting--or lack thereof--has no bearing on whether the code works and is reliable, and that kind of mechanical fiddling is better left to an automated tool."

I disagree with this one. Clear, readable code is... clear and readable. It's like not bothering to format text in a textbook because "the meat of the matter is in there, so who cares?"

Of course substance is more important than style in programming, but style also helps and is it really that much effort to make sure your code is readable for the next guy who comes along?


I agree with you. What I took away from that point was to leave it to an automated tool. The Eclipse Java formatter was the first time I saw this work. Distribute the formatter settings among your team, make everyone set the "format on save" option, and be done with it.

I just wish all languages had this kind of support.


"indent" the program (for C) has been around for a while.

The other nice thing about automated formatting is that everyone can edit code in their preferred format, as long as they convert it back before checking in (for readable diffs)


I agree with the first 10 controversial opinions, and I agree with this followup, to boot. All good points, all worthy of discussion.

As an autocrat, I'd also add another few points of view, which I conceive to be contemporaneously controversial, to the discussion:

* Code Coverage matters. Dead code is broken code. Always.

* Programming is Always in Service To The User. The User is the only way your creative, artistic, amazing, junk of spaghetti-code crud, is going to ever get Used. Use is where your software is alive. Non-use = Dead. Thus, the USER is YOUR MASTER. Serve them.

* Pretty tools are one thing, ugly tools another thing entirely. NO! WRONG! ALL TOOLS ARE TOOLS. Use what works. If you're using something because you want to, even though it sort of doesn't work, it's no longer a tool, but instead an .. ingredient .. of something. Something else, perhaps something creative. Do that shit on your own time: use the tools which work, at work.

* Discussion is the only way things ever get resolved. If you hate on something about someone, discussion is the only way the problem will ever get solved, ever. Ignoring something and being afraid to discuss really secretly means 'do not want' to solve the problem. Even vile words are still yet but words, words eventually work it out. Developers who do not use words are not the scribes they're meant to be ..


"Computer science should only be offered as a minor. You can major in biology, minor in computer science. Major in art, minor in computer science. But you can't get a degree in CS."

The way university is structured, it would be nearly impossible to minor in CS and learn anything low-level or advanced like computer architecture or operating systems. Perhaps you can push out a desirable employee with a practical knowledge of programming, but could you really educate true "software engineers" via a minor?


I am a bioengineer by degree, software developer by profession. The most valuable skill I use day-to-day is thinking like a programmer, and understanding how to make sense of data in an intelligent way. But ultimately, this is all about solving problems - programming is simply a tool towards this end. Go up to a carpenter (and I mean a well-trained carpenter, the kind who carries more than just the PHP hammer [0]) and ask him which carpentry technique to use, and he'll tell you the right answer in a flash. But ask him whether carpentry is even the right approach for a given problem, and I wish you good luck.

Programming skills are important, but a deep knowledge of the problem domain is sometimes far more critical to being able to actually solve the problem.

[0] http://www.codinghorror.com/blog/2012/06/the-php-singularity...


This opinion about the CS minor is not controversial. We are almost unanimous in saying it is stupid.

AFAIK, automated pretty-formatters are not controversial either. I wonder if your opinion about purely functional languages is controversial.

Thank you for bringing to light some real controversy. In particular the old one about compiler optimisation. See for reference http://books.google.fr/books?id=6kHs4s-79bkC&pg=PA43&...


I would actually doubt the usefulness of minors in nearly any subject (holding 2 myself).


"You shouldn't be allowed to write a library for use by other people until you have ten years of programming under your belt. If you think you know better and ignore this rule, then one day you will come to realize the mental suffering that you have inflicted upon others, and you will have to live with that knowledge for the rest of your life."

stunning.


I'm actually currently in process of writing a small library, but I have only 6 years of experience. I'm frightened.


Don't be. That idea is bollocks. Experience has approximately zero correlation with ability.

(how's that for controversial?)


Exactly. Does anybody really sit and write a library solo these days? There's an omission of combined experience here. Two others like yourself would be 18 years of experience.

(I understand the combined experience thing isn't perfect logic but that's not really my point.)


Errr... I write libraries solo. Professionally. More or less full time.


So true. I've seen people with way more than the fabled "10,000 hours" of software development experience who continue to push out truly woeful code.


The thing those people are missing is 10,000 hours of deliberate practice. Showing up is not enough.

I'd rather work with someone who has five years of experience than someone with one year of experience repeated twenty times.


Don't be intimidated. The only way to learn is to do.

Just keep in mind that a library takes a surprising amount of time to make "good enough".

The most frequent sin in library design is to knock something up in a weekend, check that it sorta works for you, and then consider it done.


Thanks! I'm not really intimidated; I just thought it was a bit comical that anything less than 10 years is not enough.

On the "good enough" aspect, the least I try to do is unit test everything I publish.


> Purely functional programming doesn't work, but if you mix in a small amount of imperative code then it does.

That isn't controversial nor is it an opinion. It's just the truth. Purely functional code has no side effects. The entire point of a program is to have side effects.


Not true. For example, a compiler can be a pure function. It accepts input that is the source code, and outputs the machine code. There's no side effect there. I admit it needs minor scaffolding to always read all of stdin first, and write all to stdout at the end, but the programme, as written by the programmer, is a pure function.

This is one of several ways that Haskell worked before there was an IO monad [1], all allowing pure functional useful programmes.

[1] S. Peyton Jones. Tackling the Awkward Squad: monadic input/output, concurrency, exceptions, and foreign-language calls in Haskell. Technical report, Microsoft Research, Cambridge, 2010.


This is even the case after the introduction of the IO monad. Monads allow referentially transparent IO.

My purely functional programs actually do things! Amazing that my ivory tower allows me to say "Hello World".


Yes, the 'minor scaffolding' has side effects. That's why I said it wasn't controversial.


The side effects in the scaffolding are purely an implementation detail.

For non-interactive use (like a compiler) you could implement the program as a function taking and returning a string.
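
A rough sketch of that shape, in Java just to keep it concrete (the "compiler" body is obviously a stand-in):

    import java.nio.charset.StandardCharsets;

    public class TinyCompiler {
        // Pure: the same input string always yields the same output string,
        // and nothing outside the function is touched.
        static String compile(String source) {
            return source.toUpperCase();  // stand-in for real compilation
        }

        public static void main(String[] args) throws java.io.IOException {
            // Impure scaffolding: read all of stdin, write everything to stdout.
            String input = new String(System.in.readAllBytes(), StandardCharsets.UTF_8);
            System.out.print(compile(input));
        }
    }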


How would I invoke a compiler that isn't interactive? The act of invoking it is the interactivity.


I prefer to mix a small amount (well, as much as I can before it starts to feel forced) of FP into my imperative programs.


The difference between a compiler not optimizing (gcc -O0) and optimizing as well as it can (gcc -O3) can be an order of magnitude in performance. That matters in many cases.

Of course, various programs (web apps) are IO-bound. And gcc -O0 should still outperform Python/Ruby/etc.



