
I like the last one better. I think it reads more like "assign one of these values to sides[i]" and less like "do one of these three things".

"By this latter I mean, if you change the code a little bit, you don't have to rewrite it; it looks basically the same."

I'd say that's kind of the point though. The first one would look "basically the same" if in the last else it assigned to sides[j] instead of sides[i]. In the last one it would look very different if it was about "doing stuff" rather than choosing which value should go to sides[i].

My point is that when you are writing complicated production code, and you are a good programmer such that your rate of features successfully implemented is high, then you will often be going to old code and changing that code to behave somewhat differently than it was before.

When you do this, you want that old code to be like putty. You want to bend it into a new shape without having to break it and start over. Sometimes it really is better to break it, if bending would be too messy or cause problems later or otherwise sets off a red flag in your head. But if you have to break and re-make everything all the time, you won't be a very fast programmer. So you learn how to bend things, elegantly.

And after a while of this, you learn how to write code that is more amenable to elegant bending in the first place. When you type code, you are not just implementing a specific piece of functionality; you are implementing that functionality plus provision for unknown future times when you will need to come back and make the code different.

(To link this more thoroughly to the previous comment: it happens all the time that you write code that is not really about doing stuff, but then you later need to make that code be about doing stuff. Sometimes this is for shipping functionality reasons, sometimes it is just to temporarily insert hooks for debugging. Declaring in advance that this code shall never be about doing stuff is usually a mistake.)

It strikes me that your argument taken to the extreme is that everybody should program in assembly language because you can do anything, anytime, anywhere. Well, at least as far as control flow structures are concerned. Certainly C is preferable to C++ if you want simple and malleable code.

Do you also prefer if-else to switch statements? (I'm not sure.)

Do you like to use goto? (I doubt it.)

Do you eschew the use of classes and inheritance? (I doubt it.)

Do you keep all your code in one file? (I'd be very surprised.)

My point with these semi-facetious questions is that structure is important and regularity in a codebase does wonders for comprehensibility and maintainability. I agree with you for the case in which you do actually have something where the underlying structure is likely to change, in that then it does make sense to write it with malleability in mind.

Consider something like command line options processing. You have 50 options. You might want a 51st option. It makes sense to use the most regular structures you can so that people don't start special-casing stuff in the middle of it all. That, or to use an options library that defines the allowable formats for you.

Part of me thinks you're just having an allergic reaction to ?: simply because it is pretty unusual the first time you see it used seriously. But it can simplify so much:


  if (a)
      return b;
  return c;

  return a ? b : c;

No, you clearly didn't understand my point. I am talking about maximizing the rate of successful features implemented. Programming things in assembly language is obviously not going to do that.

I don't know, man. I have 31 years of programming experience. I am not detecting from your argument that you have anywhere near this level of experience, so I am inclined not to get into this discussion. But I will say that your code example at the end of your comment is exactly what I am talking about. It happens all the time that I want to put something in front of 'return b' (or, in fact, I just want to put a breakpoint on that line in the debugger! Not going to happen in your second example...)

I'm not even clear that it's clear I didn't understand your point. I guess I have trouble communicating effectively. I thought your point was that flexible control flow structures like if-else allow you to change code quickly and more easily in the future. Wasn't that the point?

Assuming I have understood your preference for flexible code, I am simply stating my belief that it is useful to balance this with rigid code. Sometimes the flexible code is simpler, easier to read and write, and more maintainable, sometimes the rigid code is simpler, easier to read and write, and more maintainable. It depends on what you are trying to do. In this particular case I prefer the ?:.

The assembly argument was based on a notion I had that any time you use something more complicated than test and branch for control flow you are introducing rigidity into your code. Often this lets you be productive too; switch statements are a good example.

And lastly, let's assume I am 14 years old. Is that really the way a wise teacher talks to the young and inexperienced? My favorite teachers ask questions to check and help deepen my understanding, and sometimes it's revealed that they're learning something too.

It depends on what you are trying to do.

Which you can't know in advance, which is the point. Since you don't know what you might want to do with stuff down the road, you end up writing code in a style that is less concise but more "putty-like". I am an amateur programmer with derpy years of experience, but I found myself often taking "shortcuts" out after a while, because they made working with the code harder... just like the poster said, I realize that trying to be super clever all the time isn't that helpful. I guess you could say in places that are subject to constant change or lots of copy and paste, I trade screen space for putty-ness, and when something turns out to not really change anymore, or generally gets in the way of more interesting things, I compact it a bit.

Sometimes rigid code is simpler, sure. But what I am arguing is that it is almost never more debuggable / maintainable.

What I am saying is not specific to test and branch, though test and branch is great because it gives you these big code blocks into which you can insert more code and it's clear where that code lives and under what circumstances it runs. Which is something you don't get in assembly language, which is part of why the assembly language reply is a goofy straw-man argument.

Yes, my reply was a bit irritable; I would definitely prefer to have a reasonable discussion, but the assembly-language thing was the first volley in being unreasonable. Putting up a straw man like that is an attempt to win the argument, not an attempt to understand the other person's position. I detected this and decided, well, if that's the position, then it is useless trying to make further / deeper rational arguments, so I am just going to say, this comes from a lot of experience, so take it or leave it.

As fatbird replied, "This is shitty." (I can't reply to his reply yet because of the timed reply thing, so I am including it here.) Maybe it is shitty, I don't know, but it's true and sometimes you just have to say the true thing to be expedient and get on with life.

I don't have time to teach people on the internet how to program. I work my ass off for 4 years at a time to build and ship games that are critically acclaimed and played by millions of people. These are the kinds of things most programmers wish they had the opportunity to work on, and wish that they knew how to build. (Often programmers think they know how to build these things, and then they go try, and they fail. It is a lot harder than one thinks). I am not saying this to brag, because I honestly don't feel braggy about it right now. It's just fact. I am pretty good at programming (probably not as good as Carmack) and I have worked really hard for a long time to be as good as I am. Meanwhile I am also trying to be pretty good at game design, and oh yeah, running a software company.

So when I give advice like this, and someone retorts, and it seems to be coming from a place of lesser experience, it is not really worth my time to get into a serious argument. I am not going to learn anything. I have been in the place where I had that kind of opinion, many years ago, and then I learned more. Fine. I can either be polite and quiet about it, or say something a little bit blunt and rude, in the hope that the other person (and maybe any bystanders to the conversation) will seriously reconsider what was said in light of the new information that it comes from someone who is maybe not a joker. I can't spend a lot more time than that teaching everyone on the internet how to program, because it takes almost all the energy I can muster just to build software. (Though occasionally I do write up stuff about how to program, and give lectures bearing on that subject, like this one: http://the-witness.net/news/2011/06/how-to-program-independe...).

Of course this don't-get-into-the-argument strategy of mine has at least partially failed, since here I am typing out this really long reply. I don't know.

I used the word extreme in the assembly language argument to indicate that I understood you weren't actually advocating people program in assembly. So I don't consider it a straw man. Rather, it was an example of extremely flexible control flow. Sometimes this is what you need. Sometimes you need to modify the stack so that a function is called in a different way. Sometimes gotos can provide significantly more efficient code. Sometimes vtables are too expensive. Rails is at the opposite end of the spectrum; it is very rigid. As a refutation to your argument against rigid code, a lot of people consider Rails code to be debuggable and maintainable. Personally, I like a middle ground and try to be aware of the costs and benefits of making and using cookie cutters, an example of which is chained ?:.

I was not asking questions to "destroy your argument", I was trying to establish different contributors to rigidity / flexibility. It seemed to me that in your argument you were dismissing rigidity - certainly you weren't praising it - and I wanted to point out that in the use of any control flow more complicated than test and branch you are in fact relying on rigidity. If this is a "straw man" in your eyes, then so be it. Having functions introduces rigidity! Even if-else enforces some things. If anything, I was just actually interested in the topic, because I hadn't really thought much about the tradeoffs of rigid vs. flexible in those specific terms and I wanted to explore them a bit.

What if I revealed myself to you, and then you were like, oh shit, I better take what he says a little bit more seriously, wouldn't that just be embarrassing? I don't want to do that to you.

Creepy reply is creepy.

About this:

"Plus, I mean, what if I revealed myself to you, and then you were like, oh shit, I better take what he says a little bit more seriously, wouldn't that just be embarrassing? I don't want to do that to you."

No, by all means, go ahead. I am interested in having a productive discussion about programming, so if you can share your experience in a way that convinces me, I am totally open to it. If it turns out I am wrong, I won't be embarrassed, I will just change my opinion so that I am not wrong any more. This is how one becomes a good programmer in the first place: by paying attention to what is empirically true, rather than what one is originally taught or what seems exciting or what is in theory better.

John Hughes makes a good argument in favour of rigidity in "Why Functional Programming Matters" (e.g. http://www.cs.kent.ac.uk/people/staff/dat/miranda/whyfp90.pd...)

I cut out the creepy parts, I agree that they were. I can't reveal my experience without breaking anonymity. If you need to know about my experience for my examples and arguments to hold water, then my points aren't very good anyway, which is why I prefer to stay anonymous. Good luck with The Witness and I'm sorry that this was painful. Symbolism and meaning in games are important and I appreciate what you are doing.

I have 31 years of programming experience. I am not detecting from your argument that you have anywhere near this level of experience, so I am inclined not to get into this discussion.

This is shitty.

Not to repeat myself, but:

    if (a) 
      printf("returning b");
    return a ? b : c;
...makes it easy to modify the side effects of the return without muddying the semantics.

Now, let us suppose that expression a has side-effects...

"But that's just stupid!"

Yes. Now, given that a has side-effects...

Assuming a has type int:

  int a_val = a;
  if (a_val)
    printf ("returning b");
  return a_val ? b : c;

Anyway, it's not useful for complicated code that needs to do lots of stuff. It's useful for simple code that ends up being more verbose with if-else. It's also useful for enforcing behaviour.

Suppose I want to `printf("Returning. A: %i", a)`. Same answer.

Not to mention that it's essential for initializing some constant variables; I have yet to find a better way to do:

  const int count = argc > 1 ? atoi(argv[1]) : 1000000;
For quick test programs where I might want to change some number of iterations without having to recompile, but I also don't want to have to provide an argument every time I run it.

You're assuming that the first argument here is an integer value. For a single, one-off test harness application I wouldn't worry too much about applying a full-blown semantic code style.

Is there anything wrong with keeping the default value as a separate constant?

  const int kDefaultValue = 1000000;
  int value = kDefaultValue;
  if (argc > 1) {
    value = std::atoi(argv[1]);
  }
  const int count = value;

Personally I much prefer using [program options][1] from the boost project (since we are talking about C++ here). Most projects already have this as a dependency, and it's quite easy to set up. It also properly handles your types.

[1]: http://www.boost.org/doc/libs/1_52_0/doc/html/program_option...

Perhaps I should have clarified: when I said quick test program, I was referring to something I would write to convince myself of some minor detail (sometimes just checking compiler warts or algorithm performance) and that piece of code would never be seen or run by anyone but me. I know that the first argument will always be an integer, because I'm the only one who will pass it arguments! I'm also (kind of unreasonably) a const nazi. So given the option, where I can take a shortcut (I know, bad, bad programmer), but still do "proper" programming (consting everything by default is my SOP), I'll do it. And I'll never release that code (I'm starting to regret posting that snippet here...).

I will agree, though, option parsing libraries are a definite must for released software. I like Boost, but using even small parts of it tends to pull everything in, and at least for my current project, we are trying to minimize dependencies (it's a library). I had to fight for Boost::regex, and only got it as a fallback for compiler versions that don't have regex.

I wasn't lambasting you, and I wouldn't ever regret posting a snippet. It helps drive the conversation.

I understand and agree with Boost injecting a whole lot of dependencies, and for a small testing library or executable I would probably go the same route that you did. I'm also a pedantic const nazi at times, and merely replied back because I feel that someone may find this useful in someone else's code.

If you take a look at the Doom3 code there are even places where they stray from the code style guide when it makes sense. I'm more of a proponent of "in the moment" styling to make sure that it matches the rest of the project, or at the very least, component that I am working on.

Well, I might be unusual, but I don't object to goto, I don't use classes and inheritance much - and I like code that's all in one file, too. (Means I can keep an eye on it.) So it might not surprise you to hear that I'd vote for the code that uses the if statement :) - in fact, I don't really understand why the second would ever be preferable, outside some unlikely case specially engineered to prove me wrong.

I can pretty much promise you that at some point, you will find yourself wanting to debug one case, but not the other. If not this specific code, then some other code very much like it. If not you, then somebody else! But if the code is all on one line, how will you stop on one case and not the other? In every native debugger I've ever used, you can't. You need to split it into separate statements, so you can breakpoint each case separately. So in the long run, the code is very likely to be changed, so it'll probably end up the first way eventually. So why not just write it that way to start with?

("Ah, ah, but but, but have you heard of this thing called a conditional breakpoint?" - yes, I have, thank you.)

It's just not even funny to think about how much of my time this specific issue has wasted over the years :(

All you need to do is hoist the conditionals out of the ?: if you have a bug and you want to inspect them, either with printf or a debugger. Everything else is available before the statement.

Personally I don't use debuggers except to get a backtrace from a core dump once in a while. I use various forms of automated testing and logging to stderr/stdout. They just aren't very useful with concurrency bugs.

The "all you need to do" bit is exactly the way this has wasted my time. I'm not saying that doing it is particularly complicated, nor that any individual occurrence is especially onerous, and in fact it usually isn't (though you do lose your state, and that's annoying) - but the time taken mounts up!

Ah well. I've said my piece. If your experience hasn't convinced you, like mine convinced me, then I imagine yours was a lot more fun ;)

Note that my response was mainly to the "hard to understand because questionmarks" part of your post. I think that's a pretty weak argument, and that being clear about which part of the code depends on the ifs easily makes up for weird syntax or whatever. The "that is goofy, this is mature" stuff is ridiculous.

I get that there are other reasons for sometimes choosing if-statements instead of if-expressions in cases like this. But then it quickly comes down to a bunch of technicalities (language can't do this, debugger can't do that, ...), and really, even if we're talking C++ stuff only I would not agree with it as general advice.

Precisely. And there is even a little bit more to it.

1. It reads "assign one of these values to sides[i]".

2. It would not allow some other people's spurious code into the assignment, which is a good thing.

3. Space is used to convey meaning; note how sides[i] stands next to dists[i], the ternary operation is formatted as a table, etc.

If you have people adding "spurious" stuff to your code, you have a bigger problem than coding style. Why would you need locks on your door if you live on a planet with only friends on it? If you don't, why don't you?

Not allowing other code in is a bad thing. See my putty reply above.

It depends.

But in this particular case, some trivial extra code inserted inside that if statement may break things. Like breaking vectorization of that 'for' (note that we are going over vertices).

When you are writing the code, some times you may want to put additional constraints on the allowed operations, modifications, etc. A trivial example in C++ would be using 'const &' instead of '&'.

This is true (especially: breaking vectorization). But premature optimization is not a good idea, and if you are doing real optimization, you are going to rewrite that piece of code 10 times anyway, so it is in a different class of problem and the putty stuff I was saying before does not apply (i.e. this code is in the 1% or so of the codebase that is highly performance-sensitive).

Optimized code is just a different thing from general code (if one is a productive programmer).

And again it depends. I would return to a simple example of passing a parameter by a const reference.

When you are writing this const in the "const std::string &name" you are 1) constraining developers from breaking things 2) making the code more efficient 3) providing clues to other developers 4) providing clues to optimizer.

All of these points are important to some degree. And it is just the same, when you are writing this assignment.

And writing efficient code that provides all these clues in every way possible (including consistently and meaningfully arranging the white spaces) is certainly a good idea. Funny thing: with experience it doesn't take extra time to do that. You just write it, and it comes out in the right way: readable, efficient and optimal down to CPU microcode and aligned memory accesses.

I think we have different ideas about what constitutes optimization.

Sure, putting const in parameter declarations is easy to do. It may even buy you a little bit of speed because the compiler is a little bit clearer about pointer aliasing and whatever. But it's not going to make a difference in the equivalence classes of slow code / fast code / Really Fast Code.

Serious hardcore optimization usually involves changing the way the problem is solved to something different than the way the old code thought about it: either constraining the problem space further, or attacking it from a different direction. This usually involves rewriting everything since there are so many cross-cutting concerns. Sometimes one has to do this several times to figure out which way is really fastest. Microoptimization things, like whether you used const somewhere or not, are much smaller details that have correspondingly small effects.

For code that one isn't specifically optimizing, speed probably doesn't matter. There was an exception to this, where we hit a little bit of a bump in the late-2000s on platforms with in-order CPUs like the PlayStation 3 and Xbox 360, because they have such a high penalty on cache misses; this tended to make general-purpose code slower and result in much flatter profiles. But now we are pretty much out of that era.

In general, const is more of a protection than an optimization. This is especially true heading into the massively parallel future, where const just sort of tells you whether some code is known for sure to run safely in parallel or not... and running safely in parallel matters tremendously more to overall speed than the number of instructions in that bit of code, or whatever. (Anyway, C++ is not at all a viable language in the massively parallel future... so that is going to be interesting.)

Thanks. I've really enjoyed that discussion with you. And by the way, your comment mentioning several decades of experience was certainly appropriate.

Yeah, this discussion went a lot better than the other one! Certainly more enjoyable, anyway.

Constness of parameters has zero effect on optimisation. Const declarations may provide a small benefit.

Go ahead and apply const as you see fit, but don't expect better code out of it.

Would you guarantee that for every implementation of every C++ compiler ever? Especially considering that constexpr is already there?

So, I'd rather leave these clues to the compiler/optimizer.

Constness of pointer or reference parameters has no optimisation effect because it doesn't convey any reliable information. It doesn't indicate whether the thing is written to in the body of the function (the compiler already has that information). It doesn't indicate whether the thing can be written to through another pointer (const parameters may be aliased). And it doesn't indicate whether the thing is written to through the same pointer by a callee (because the language contains const_cast).

Since the very thing that const is supposed to indicate is "no writes", and it doesn't do that, const annotations provide zero information. Thus they add no scope for optimisation.

constexpr isn't relevant to this issue. Since sane programs don't spend any appreciable time calculating constants, it's also rather uninteresting for the purpose of making programs run fast. As far as I can see its only practical purpose is to expand the set of fixed terms allowed as template arguments.

Would you guarantee that optimizers wouldn't use heuristics at IPO stage?

Either way, you are right, and my comment should have been: 1) constraint 2) providing clues to developers 3) providing clues to optimizer.

Without yielding useful information, const annotations can't hope to have any positive effect on compiler optimisations, fancy or not.

I agree that const is useful for the other reasons you mention.

Who's to say that code isn't the tenth iteration? Especially given the context of the discussion...

Then why are you using a statically-typed language?
