Simplistic programming is underrated (lemire.me)
96 points by mgdo on Dec 6, 2017 | 90 comments

I don't think it's underrated; it's that it's hard to write.

You almost go full-circle. You start off writing highly procedural code (with too much branching), join the GOF-all-the-things religion and eventually revert to highly procedural code (more specifically "algorithmic code" or "functional code").

As an example, many years ago I wrote a dependency resolver. It needed to find cycles (as we had some rules that could resolve those cycles) in order to make changes in the correct order. At the time I was a junior developer in that problem space. My solution consisted of an O(scary) algorithm; consisting of classes, patterns and all the good things that make your code "simple" and "understandable" (it was not). Customers eventually threw graphs with a few million edges at it and we needed a new solution. I have no idea how I happened upon Tarjan's Strongly Connected Components Algorithm but some hundreds of lines of code was replaced with dozens. Minutes turned into milliseconds.
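The algorithm mentioned really is only a few dozen lines. A sketch of Tarjan's strongly connected components algorithm in Python (the graph at the bottom is a made-up example, not from the original system):

```python
def tarjan_scc(graph):
    """Return the strongly connected components of a directed graph,
    given as a dict mapping each node to a list of its successors."""
    index_of, lowlink, on_stack = {}, {}, set()
    stack, sccs, counter = [], [], [0]

    def visit(v):
        index_of[v] = lowlink[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, ()):
            if w not in index_of:
                visit(w)
                lowlink[v] = min(lowlink[v], lowlink[w])
            elif w in on_stack:
                lowlink[v] = min(lowlink[v], index_of[w])
        if lowlink[v] == index_of[v]:  # v is the root of an SCC
            comp = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.append(w)
                if w == v:
                    break
            sccs.append(comp)

    for v in graph:
        if v not in index_of:
            visit(v)
    return sccs

# A graph with one 3-cycle and one isolated node:
sccs = tarjan_scc({1: [2], 2: [3], 3: [1], 4: []})
```

Each node and edge is visited once, which is where the minutes-to-milliseconds improvement over the naive approach comes from.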

Here's the thing about simplicity: even though the code was simple, even I (as the author) didn't understand it at the time - I trusted the guy with the PhD. Many years later I finally understand how it works. It is an incredible masterwork of simplicity, but as it turns out, something that is simple is not always understandable.

Simplicity probably has a simple definition in the dictionary, but in practice it can be quite complex to make something simple. So you can't just say "make your code simple" because it's really hard to do that.

“I didn't have time to write a short letter, so I wrote a long one instead.” - Mark Twain

It's both hard and underrated - it's underrated because it's so hard that a lot of developers haven't developed the skill of writing simple code, and so don't recognize the value when others do it.

I agree. There seem to be two interpretations of "simple" - simple in being concise, like a beautiful mathematical equation, or simple in being easy to follow and understand. I assume the latter is preferable when factoring in having other pairs of eyes reading/maintaining your code, especially since the latter does not always imply less efficient code.

Actually, I think the interpretation of simplicity as ease is an awful thing in the context of programming and should be actively discouraged. Rich Hickey's talk "Simple Made Easy" is the best explanation I've ever seen of this.

Simplicity is the answer without the question.

There is a major problem with how programmers are judged.

Say you are given a seriously challenging and apparently intractable problem; that is to say, the naive approaches are all hideous train wrecks. You work on it long and hard, until you have a key insight that allows you to implement the simplest, most elegant solution.

Now a naive observer will look at it, say well that's obvious, and then ask why you wasted a week to write 40 lines of code.

Anyone who thinks more lines of code equals better software should not be working in this industry. Hopefully the naive observer you're referring to would be someone with no background in software development.

> someone with no background in software development

These people are often management, unfortunately.

"In science and engineering, some of the greatest discoveries seem so simple that you say to yourself, I could have thought of that. The discoverer is entitled to reply, why didn't you?" --- http://www.paulgraham.com/taste.html

Your apparent mental prowess will fail to impress those who find it easy to do the same. I have met my share of college students and professors who excel at dropping the names of a few philosophers, at using words only 1% of the population knows about… but, at a certain level, it does nothing for them. People simply roll their eyes and move on…

Solid life advice.

Since he's a professor, I have no doubt that what he says is true in his environment.

On the other hand, it's all relative. After having worked in Silicon Valley for over 15 years, I think the more common mistake is knowing too little philosophy rather than too much. (Of course, I agree that 5 dollar words confuse issues more often than they illuminate them.)

It wouldn't hurt a lot of engineers and tech companies to learn more about the past and be more literate.

As an example, off the top of my head, here is a concept which comes up a lot in software:

"The Map is not the Territory"




You don't have to do statistics / big data for very long before you hit philosophy. It bottoms out pretty quickly, i.e. how do you learn about the world. Likewise programming is about making things, and sometimes about modelling the world, and there are a lot of useful philosophical concepts in that realm too.

This is without even getting into the "morals" of technology, which is a big thing in the news lately.

My pet peeve on HN is people writing comments like they're gonna be peer reviewed. Either the less common words that they use are obviously out of place like someone just discovered a thesaurus (conflated is my favourite example), or it's consistently formal and quasi-academic so I have to put extra effort into reading it. Either way it just makes the author sound like a pretentious wanker.

Yep. "Orthogonal" is one of my favorites. Along with "order of magnitude".

Yeah, except orthogonal has a very precise meaning in mathematics; there is no other word that can be used to replace it when describing a multidimensional vector that is perpendicular to every other vector. This is a very important feature in that it means the vector can be modified and it won't impact any other vectors.

You probably hear it a lot, because it's generally important to make functions orthogonal with respect to each other so they may be modified without affecting one another.
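The geometric claim is easy to check concretely. A toy sketch in Python, with plain lists standing in for vectors:

```python
# Two vectors are orthogonal when their dot product is zero.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

u = [1, 0, 0]
v = [0, 2, 0]
assert dot(u, v) == 0  # orthogonal
u[0] = 5               # modify u along its own axis...
assert dot(u, v) == 0  # ...v's direction is unaffected: no shared component
```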

I'll agree that order of magnitude is starting to feel like it's overused as rhetoric in programming circles, even though it too is a very important concept in engineering.

“order of magnitude” is a pretty useful concept, to be fair.

But it's twice as long as saying "ten times."

"The server is an order of magnitude faster than our old one."

"The server is 10 times faster than our old one."


More often heard: "The server is an order of magnitude more performant than our legacy server."

As a non-native English speaker trained in academia, I'm pretty sure that sometimes my comments must come out like that. I'm not doing it on purpose, it's just that I don't know better!

I suspect there are many others in my situation here, especially those whose native language is of Latin origin, because Latin-based words are considered more formal/academic in English, yet they're much easier for us to learn!

Just keep that in mind when judging others' comments please ;)

Non-native English usually just has a few weird word choices, as opposed to a really formal register that replaces common words with much less common ones that mean roughly the same thing.

I'm talking about comments that my younger self would have written when I thought I would add weight to my argument by trying to sound smarter (so I'd be a hypocrite to judge comments harshly anyway).

> My pet peeve on HN is people writing comments like they're gonna be peer reviewed.

I consider myself guilty of doing this, but that's because (most of the time) I actually proof-read my comments before posting them, and consider whether they provide a useful addition to the discussion. If one of my comments looks like it's going to be peer-reviewed, it's because it's self-reviewed, and you don't get to see the long tail of comments that I typed out and then deleted because a) they didn't add anything useful to the discussion or b) I could not express my thoughts clearly enough.

I'm talking more about how many unnecessary words I have to read than about clarity or contributing to the discussion.

You could've written something like this instead:

I'm guilty of this, but that's because I self-review and consider whether I'm actually contributing to the discussion before posting. You don't get to see the vast majority of un-submitted comments, because they either didn't add anything useful or they weren't clear enough.

Same message, but much easier to read.

First, you're not stupid/dumb/whatever.

This idea of simplistic programming obfuscates the reality that the enterprise of software programming is one of the most complex endeavors humans have undertaken. We desire simplicity because the reality is terrifying: that programming is far too complex for us to manage. That we're not smart enough to keep up. If we keep on using simple tools, this reality will only continue to get uglier.

When I first started programming it was all PEEK, POKE, GOTO 10. Life was simple. Single processor, some registers, main memory, and an incredibly slow disk drive. Mastery could be achieved with a book and a few weeks off in the summer.

Since then I've worked on Mozilla, Linux, Openstack... I've written interpreters and JIT compilers, graphics engines, REST APIs, databases, eventually consistent distributed programming systems... it has only exploded in complexity.

A modern computer is a microscopic distributed system. There are no less than four cores in most computers. Several hierarchies of caches. Multiple channels between cores, main memory, and other buses. There's a clock in there that's probably wrong and an application that is running and coordinating over the terribly slow network with other processes running on other computers... it's a bit much to take in.

And yet we manage to ship software constantly.

We can do that because of abstractions.

Some abstractions are harder to understand than others. Some are made up by other programmers and given strange names and come loaded with a bunch of jargon. Others borrow theirs from existing literature like, say, mathematics.

I think more programmers are becoming interested in functional programming techniques and are learning to get over the hump of learning them because they care deeply about reliability, correctness, and expressiveness and they still want to write quick, simple programs.

You can write 4M lines of carefully crafted C and hope you didn't miss a case or off-by-one error. And that you wrote the logic correctly.

Or you could write 30k lines of Haskell and know that, at most, only errors in the logic will be present. And with a little more effort you might even be able to encode your propositions in your types and cover your bases there too.
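One way to read "encode your propositions in your types", sketched here in Python rather than Haskell (`NonEmpty` is a hypothetical helper, not from the thread):

```python
from dataclasses import dataclass, field
from typing import Generic, List, TypeVar

T = TypeVar("T")

# The proposition "this list has at least one element" lives in the
# structure itself, so first() can never fail at runtime.
@dataclass
class NonEmpty(Generic[T]):
    head: T
    tail: List[T] = field(default_factory=list)

    def first(self) -> T:
        return self.head

    def to_list(self) -> List[T]:
        return [self.head, *self.tail]

xs = NonEmpty(1, [2, 3])
```

A type checker then rejects code that tries to build a `NonEmpty` with no head, instead of a test suite catching the empty-list case later.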

It might come with a lot of jargon and seemingly-impenetrable concepts but it's worth learning.

Simple vs. non-simple isn't the best way to think about code.

Rather, I would think about the requirements of the code. Who is going to be reading it? Maintaining it? Do advanced language features give some kind of advantage (runs faster? less likely to have bugs? more extensible?). Will new team members likely introduce bugs? Is spaghetti code better or worse than using less known patterns/language features?

Don't use advanced language features in production code for these reasons: (i) to impress people or (ii) to pad your CV.

Don't avoid using advanced language features just because someone wrote a blog post saying it's a bad thing to do.

In a nutshell: Think.

The whole idea of calling functional features advanced language features is a bit ridiculous, I think. Yes, something like Java only recently started supporting functional programming, but the paradigm has existed for a very long time.

For me it is how those functional features are used that one needs to be careful of. I like the idea of the code that you read and think "yep that's right", rather than - OK I need half a day and a whiteboard to understand this.

However the exact same piece of code might be wrong in one team, and right for another, depending on their experience and backgrounds.

The blog post barely gets into the idea of what simplicity means in programming.

If you have any interest in the topic, I recommend Rich Hickey's talk "Simplicity Matters": https://www.youtube.com/watch?v=rI8tNMsozo0

My personal favorite is Living with Complexity (2010) by Don Norman (author of The Design of Everyday Things).


I really liked this, because it resonates with my approach.

I think I first realized the basics of this some time in college. As a teenager around '96 I discovered IRC, and hung around chatrooms being, well, a teenager. I discovered smileys, & how 2 use abbrevs to pack more in < chars. Also 1337speek... Only a few years later did I realize that what was more impressive was being able to (just as quickly) form clear complete sentences.

Also related, Pascal's comment in a quickly-written letter to a friend: "I would have written a shorter letter, but I did not have the time."

Also related, Kernighan's quote on debugging: "Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?"

Ultimately, I must admit that I still try to show off in my work and presentation, but I have learned that a work that is beautiful by its simplicity and humility has way more staying power than a complicated and sophisticated one ;)

> Only a few years later did I realize that what was more impressive was being able to (just as quickly) form clear complete sentences.

What's even more impressive still is knowing when to use which rather than locking yourself into a prescriptivist hole.

I agree that overcomplicating code is bad, but then again, what is "simplistic"?

Let's talk Python. To me, using comprehensions is way simpler than loops. Using

  for i, item in enumerate(my_list):
    # do some stuff with i and item
is way simpler than

  for i in range(len(my_list)):
    # do some stuff with i and my_list[i]
And still, I bet that most C/Java folks will look at these things with disdain.

Also, higher-order functions. They do sound strange, at first. And yet they're a great tool to combine simple pieces of code. I'm starting to think that "elegance" is just the ability to build complex interactions of simple elements. Think chess, or languages like Lisp and Smalltalk, or Euler's identity.

You probably want to use the conceptually simplest construct your language / standard library offers. In Python that's enumerate. In C, that'd be the standard for loop. In Java that'd be the enhanced for loop. People reading your code, familiar with the language, should have to learn as little as possible to understand it.

I could make an awful enumerate implementation in C:

    #define enumerate(item, array) for (int i = 0; i < sizeof(array)/sizeof(array[0]); i++) { item = array[i];
    #define enumerate_end }
    enumerate(int number, numbers)
        printf("Number is %i\n", number);
    enumerate_end
...But then everyone reading it would have to go read my enumerate definition, whereas they'd understand the regular for loop immediately.

I think higher-order functions are a great tool when they're used to remove complexity. The tricky part is that your reader needs to already understand how they work. If adding the definition of a higher-order function would make your code longer than simply not using it, I think it's a bad case to use it.
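A tiny example of that trade-off: one higher-order helper (a hypothetical `keep`, standing in for the built-in `filter`) replaces two near-identical loops, but only pays off if the reader already knows the idiom:

```python
# Hypothetical helper: the loop is written once here...
def keep(pred, items):
    out = []
    for x in items:
        if pred(x):
            out.append(x)
    return out

# ...and reused, instead of duplicating the loop at each call site.
evens = keep(lambda n: n % 2 == 0, range(10))
caps = keep(str.isupper, ["a", "B", "c", "D"])
```

If `keep` were used only once, inlining the loop would likely be the simpler choice.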

The first is Pythonic code. I'm guessing the author isn't talking about conventional code, but a more agnostic point of creating complexity for complexity sake, which, unfortunately, is a very common issue.

The other way can also be a problem. If you are manually writing out range(len(stuff)) in Python on all your loops, you are writing code that is complicated, overly verbose, difficult to understand, and error-prone.

Actually I don't think he was successfully making that agnostic point.

It's more like: don't gold plate it, and don't use abstractions that you learnt later.

Not gold plating things seems obvious. I think there's a TDD advocate who has an example of when he created a wiki that stored its data in a text file. Purely by accident. But it worked perfectly for his use case; he didn't actually need to fire up the latest distributed NoSql blah blah blah. And perhaps that's something worth considering.

Not using abstractions learnt later in life tho. I think that's a problem. It took me years of contorting my head to fit in some of the customary Haskell abstractions like monads and functors. Today I probably couldn't describe them accurately, but I'm sure I benefit from when I think in terms of them. For instance, it seems that the code is better if I write a method which accepts an object then does something, vs writing a method which might accept an object, then tests for null, and if not null it does something. (And similarly for looping constructions for instance. You write a more superficial level of code that knows about nulls or arrays and a deeper level of code which doesn't. Directly thinking "monad" literally helps me here.) It's simpler, but it's not at all more simplistic.
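The null-handling split described above can be sketched in Python; `fmap` is a made-up name for the Maybe-style wrapper, not a standard library function:

```python
from typing import Callable, Optional, TypeVar

T = TypeVar("T")
U = TypeVar("U")

# The superficial layer knows about None; the deeper function f never does.
def fmap(f: Callable[[T], U], value: Optional[T]) -> Optional[U]:
    return None if value is None else f(value)

assert fmap(str.upper, "hi") == "HI"   # plain value: f is applied
assert fmap(str.upper, None) is None   # missing value: f is never called
```

The null check is written once, in one place, instead of at the top of every method that might receive a missing value.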

And so at this point, I want to distinguish between complex/complicated and simple/simplistic. Things which are complicated or simplistic are bad because they're simpler or more complex than they should be. Things which are complex or simple are inevitable and complaining about them is unproductive, like complaining about the weather.

Counter: This is what incrementally leads to voodoo Perl scripts that only the one magician who wrote it can read. Sure all those tools are great individually. But nobody else knows what they are, so maintaining it is impossible.

In a trivial example like this, between two similar executions, I'd tend toward the C-like one for readability. Only use the language feature if it really saves a lot of lines or abstractions etc.

So in Java 8, if I could rewrite 100 lines of code so that a whole function is just one lambda and nothing else, sure, go for it. That's just a Stack Overflow search away for anyone. But I wouldn't toss in a lambda with a bunch of (insert new language feature) all on one line. I wouldn't expect another programmer to know all that yet.

For me, any talk about the best way to loop over or enumerate an array is not the most helpful in a discussion about code simplicity. Using a loop, an enumeration, or a functional map should not really matter that much; most of the time they all do similar things. I would think that any intermediate programmer recognizes them. Thinking too much about which is better is just losing time better spent elsewhere.

“Mr. William Faulkner got into the act by observing that you never crawl out on a limb. Said you had no courage, never been known to use a word that might send the reader to the dictionary.”

“Poor Faulkner. Does he really think big emotions come from big words? He thinks I don’t know the ten-dollar words. I know them all right. But there are older and simpler and better words, and those are the ones I use. Did you read his last book? It’s all sauce-writing now, but he was good once. Before the sauce, or when he knew how to handle it.”


I've been thinking about this lately, too.

I strongly believe this, to the extent that I have a preference for code which is simple but incorrect over complicated but correct.

Obviously simple and correct is best, but if I can't get that (at least in the short-term) I would sacrifice correctness for simplicity.

You can often incrementally improve a simple solution, but it is very difficult to simplify a complicated solution.

Discarding previous knowledge where it is applicable and simpler is obviously unwarranted. Similarly to how we utilize Newton's laws within a lower energy and mass range, we should prefer the simpler equations where they make sense. However, I'm not sure that a for loop is actually simpler than map-reduce, as it depends on the context. Who is reading your code? How complex is the code inside the for loop? What kind of problem are you solving? The vocabulary associated with functional programming is better equipped to handle mathematical expressions than imperative code and, in this case, simpler.

In essence: adding complexity for the sake of complexity, or to come off as intelligent, is bad...

I've been working on a project lately that was designed by a really smart guy. It follows the OOP patterns and IoC and all of that. Trouble is, you can introduce such complexity that maintenance by someone a few years later can get incredibly difficult. A lot of this code I'm working in has implicit dependencies that are really hard to trace w/o stepping through line by line. If the code were more explicit, maybe it would offend the ideologues, and maybe it wouldn't be as flexible as it is (debatable), but it sure as hell would be easier to maintain.

"If I had more time, I would have written a shorter letter."

It sounds like the author is conflating sophistication with complexity. In my experience, "simplistic" programming leads to greater complexity. It is the very sophistication and use of more advanced programming and structural patterns he rails against that in fact enables concise, maintainable code.

Nice article. Code is written for the maintenance programmer to understand. Simple is better.

I think the sentiment is also true for devops/infrastructure. I like to use the term "minimalistic" instead of simple, because I think some people associate simple with naive and slapdash.

I like minimalism too, but people associate that with laziness. You can't win!

Good thing I have zero interest in functional programming and other cyclical fads. I just concern myself with keeping my code understandable, maintainable (with the help of my teams) and tested.

"Simplisticity" is in the eyes of the beholder. (Many serious programmers see the C++ template language as overly simplistic.)

> I like the concept of “simplistic programming” by which I mean “programming that is so simple that people will criticize you for it”. At first, that sounds strange… can we really get criticized for being “too simple”? Of course, we do.

Oh man, this hits home.

I've attempted to be a voice of reason on this front so many times, I've practically given up on it. I've come to the conclusion that most programmers suffer from an intellectual Napoleon complex (which I've seen in academic Philosophy, as well). I've argued for the use of simple "for loops" in lieu of complex map-reduce logic, argued against overly-complex threading logic where threading wasn't necessary, argued against using Spark where the dataset was so tiny, it could be handled by my iPhone. All my suggestions often fall on deaf ears. The counter-argument tends to be some contrived edge-case which (by that point in the discussion), I'm simply too fatigued to argue against. I've lost this war of attrition many (many) times.

The irony is that most OSS code tends to be "so simple it's dumb" -- why? Because simple code is easy to contribute to! I think if more professional programmers actually looked at real code outside of their corporate Bitbucket, working as a programmer would be much more enjoyable.

Is map-reduce really less simple than a for-loop? I agree with your other examples, but I see these two things as totally equivalent (unless you're in a language that makes a dog's dinner of this type of thing, like old C++).

I think so, consider[1]:

     var r = array.map(x => x*x).reduce((total, num) => total + num, 0);
vs its for-loop counterpart:

    var r = 0.0;
    for (var j = 0; j < array.length; j++) {
        var x = array[j];
        r += x*x;
    }
Maybe I'm just not as smart as everyone else, but I only have to keep like two things in my mind when looking at the for loop (easy), whereas in the map-reduce case, I need to have a very "holistic" view of what's going on (hard).

[1] https://stackoverflow.com/questions/17546450/for-array-is-it...

I think you're giving a bad map-reduce example. It is written in a very immature functional programming style.

It should be written like this, IMO:

  var r = array
          .map(square)
          .reduce(sum, 0);

1. Separating out actions by line

2. Pass named functions that describe what they do

Eliminating the map, named functions or not, makes it pretty clear:

    var r = array.reduce((total, num) => total + (num * num), 0);

This is interesting to me because the map reduce has way less cognitive load for me.

With map reduce I see the following:

- We've got an array

- We're going to loop through it and square everything

- We're going to take that and add it all together

- The result is r

When I read the for-loop counterpart:

- We have a variable called r that we're initializing to 0

- We're starting a loop with j as the index initialized to 0 iterating through an array

- We've got a variable x on each iteration that is the current position in the array

- We're adding the square of x to r

Maybe it's a personal quirk. for-loops make me feel stupid.

The main advantage that I’ve seen when writing and reading functional instead of procedural/imperative code, is that the intention is usually clear.

We have a list of numbers that we want to square and add up, instead of the more mechanical steps with the loop approach.

The biggest issue I have is with this step:

> - We're going to take that and add it all together

Here, we have an imaginary variable: namely, "that." Something that we need to keep track of and is not explicitly referenced anywhere. It's relatively easy when you only have one of these things, but with complex map-reducer logic, you can end up having a whole bunch of these.

In the for-loop, everything I use is explicitly referenced. There's no "that" -- everything is "r" or "x" or "j."

I dunno, I find the second one harder. Because now you have to know about array indices and things like that. The for loop forces you to deal with petty details. The map reduce lets someone else deal with that.

I tend to think there's some additional overhead from Javascript's particular object-oriented implementation of mapping and reducing when compared with Haskell's, i.e.

     f xs = foldr (+) 0 (map (\x -> x * x) xs)
seems a bit easier to me.

Is it habit? Probably. But I really find it hard to understand why maps and folds are simpler than manual iterative loops.

since a map and a sum are both folds, you can simplify it further:

    f = foldr (\x total -> total + (x * x)) 0

but this is actually a counter argument. look at all these ways of doing this simple operation with higher order functions. plenty of replies from intellectual napoleons like me, too. whereas with the for loop, there's nothing much to say, which seems like a good thing.

i genuinely don't know how to feel about this.

I agree, but I think it depends a lot on your background as well as the background of the people who are likely to be looking at and modifying the code. Virtually any programmer is going to be able to eventually read and understand the second version while requiring little extra research, but for a lot of programmers they will have no idea how to reason about the first and no idea how to use/modify/extend it, so it is just some weird black-box.

Yes, obviously they can learn, and I don't think learning is a bad thing (it's absolutely a necessary skill for a programmer, or anybody), but if the only place in the code that ever uses map/reduce is that one location, then programmers who come by it are likely not going to get a lot out of the time they're going to sink trying to figure it out. If it was just a basic loop, then overall people will likely spend less time figuring it out. For one piece of code it's not that big of a deal, but if your code-base is just full of fun little one-liners everywhere, it quickly becomes a mess to understand.

But with that, if your code is already full of stuff like map/reduce, then I think using map/reduce instead of the basic loop may very-well be preferred for readability (Though of course, your citation brings up performance concerns, which should also be taken into consideration). I think most important is just keeping the code-base consistent.

Except the overhead of that loop is orders of magnitude higher than the reduction's. Did you get the bounds right? What happens if you accidentally start at 1 or end at <=? What if you're nested? Did you use i and j in the right place? The nooks for bugs in your "simple" loop are so profound that, in a language with expressive maps, I wouldn't accept that code in a review.

"Simple" is great until you're debugging off by one errors or any other errors between chair and keyboard. Complexity, in the sense that this guy rails against, is expressiveness. And, expressive code beats "simple" code every day of the week.

--Quick edit Imagine that loop in assembly. It's objectively "less complex" yet you didn't express it that way. Why? I think the answer to that question is the heart of the positive argument for programming abstractions (and the "complexity" they bring).
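The point about bounds can be made concrete. A Python sketch comparing the two forms on the same sum-of-squares task:

```python
array = [1.0, 2.0, 3.0]

# The index-based loop has bounds and an index to get right...
r = 0.0
for j in range(len(array)):
    r += array[j] * array[j]

# ...while the expressive form has no index to get wrong.
assert r == sum(x * x for x in array)
```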

> Except the over head of that loop is orders of magnitude higher than the reduction.

Except that it's not. Read the SO answer I linked.

> The nooks for bugs in your "simple" loop are so profound that, in a language with expressive maps, I wouldn't accept that code in a review.

Yep, I've definitely worked with people like you. With all due respect, I just think you're fundamentally wrong. Off-by-1 errors will happen no matter what, and even a rudimentary test suite can catch those.

Off by one errors can't happen in the functional version. There's no reason to take a fatalist stance here. Just saying "well those programmers should write more tests" is silly when you could say, well why not tests and write code that is naturally hardened against foot guns.

I was referring to mental overhead; though in a more modern language like rust the performance implications would be far less profound than the linked SO discussion.

> Off by one errors can't happen in the functional version.

No? Not only can they happen, but I'd argue that it's easier for them to occur when using `reduce`, because you have to remember the initializer. Omitting it has no effect in the given example, but the given example is contrived. It's not rare to be dealing with something more complex than a list of numbers, so let's try that instead.

Let's say your items are a list of nodes. You want to sum their payloads to determine their aggregate cost. And, like jasonkostempski[1], you see `map` as superfluous. You've got this. You write:

    nodes.reduce((sum, x) => sum + x.value)
Uh-oh. Now the fact that there is no initializer does affect the result.

1. https://news.ycombinator.com/item?id=15867033

> No? Not only can they happen, but I'd argue that it's easier for them to occur when using `reduce`, because you have to remember the initializer.

I would say that's not an off by one error. Its a similar scale and type of error, but not the same.

Er, the single-parameter version of reduce differs from the two-parameter version in that the semantics are changed to literally skip the callback invocation where the first element gets passed as x. Instead of being called n times, it gets called n-1 times. It's one of the most clear-cut manifestations of fences vs posts—the quintessential off-by-one problem.

It's much more insidious than an off-by-one error in a for-loop - the initializer is hidden!

Unless you use a language that actually forces you to pass required parameters to a function call.

... which still wouldn't fix this problem, because the initializer is not a required parameter. It's allowed to be omitted by design, to signify that the first element shall be skipped and never passed as the second arg to the callback—or to put it another way, instead of being called once for each element, it's meant to handle the gaps between elements.
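Python's `functools.reduce` has the same design, so the trap is easy to demonstrate there: with no initializer, the first element itself becomes the accumulator and the callback runs n-1 times, which blows up as soon as the elements aren't the same type as the total:

```python
from functools import reduce

nodes = [{"value": 3}, {"value": 4}]

# With an initializer, the callback sees every element:
total = reduce(lambda acc, n: acc + n["value"], nodes, 0)
assert total == 7

# Without one, the first dict itself becomes the accumulator, and the
# callback then tries {"value": 3} + 4, which raises a TypeError.
try:
    reduce(lambda acc, n: acc + n["value"], nodes)
except TypeError:
    pass
```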

> Just saying "well those programmers should write more tests" is silly when you could say, well why not tests and write code that is naturally hardened against foot guns.

I don't think off-by-1 errors are that big of a foot gun, compared to, like, having a null as part of the language, or pointers, or about a million other things.

> I was referring to mental overhead

Gotcha. I still think that "linear step-by-step" thinking like "you take stuff out of a bucket, and then do stuff to it" is easier to parse than map-reduce, which requires you to have a "big picture" view of the whole process.

Why does a programmer need to know an index counter exists if the index counter isn't used inside the for loop?

Why have a compiler at all if you want to defer all error-checking to runtime?

Might look nicer to write the first version as:

  let square = (n)   => n*n
  let sum    = (s,n) => s+n
  let r = array.map(square).reduce(sum, 0)

Both map-reduce and for-loop are idioms. That is, they are things that one fluent in the language should be able to read at a glance. I suspect that dvt has more experience with languages where the for-loop is the standard idiom.

And in fact, many languages have only one or the other as a standard idiom. So it's really pretty simple: Use the one that's a standard idiom in the language that you're using.

Your language offers both? It's still pretty simple: Use the one that's better understood by the people that you're working with.

They understand both? The project still probably leans one way or the other. Prefer the one that is more commonly used in the project. Projects wind up using subsets of languages; using things outside the subset, while valid, looks "odd" within the project.

It's all about minimizing the cognitive load on the reader. It's not about what the writer prefers.

C++ guy but anecdotally, the first one really does read better.

The for loop is definitely easier to debug if that is necessary. Map reduce is more concise of course.

What about:

(loop for x in a summing (* x x))

I mean, it can be, right? Take today's Advent of Code problem, where you want to grab the index of the largest element in a sequence breaking ties toward early elements, then re-distribute that element one at a time over other positions starting from the largest element (looping around as required).

I can write that using various maps, folds, flat_maps, and reduces, but no one else would understand what is going on in the code. Maybe someone will post the right idiomatic functional way to do that, but nothing elegant comes to mind.

I agree: one of these things is not like the other. Map / filter / collect is not better because it is "fancy" but because those operations match the semantic intent of most operations on collections while being constrained to do a single thing. Conversely, for loops are unconstrained and reflect no intent in particular.

People are constantly quizzed on map-reduce, spark type of stuff in interviews.

Resume 1:

> Did simplistic programming using for loops.

Resume 2:

> Map reduce using spark.

Guess which resume gets picked up by filters? No one wants to get caught with their pants down when they have to look for their next job.

Pressure to "keep up" is unfortunate reality of our profession. Resume Driven Development is a thing.

I can't... agree with you more. Just lost one recently. There was no need to go "microservice" but we did (I have nothing against that structure). Argued against it, got shot down, and they went ahead and are implementing it now. Happened so many times and it's... dead tiring.

Most corporate problems are most likely solved by pretty simple code. The problem is that you are not building your resume that way and your future job searches will suffer. To stay up to date I often squeeze things into my work that are not strictly necessary.

> suffer from an intellectual Napoleon complex

I call it the low talent to ego ratio.

While I disagree on the map -> reduce I agree on the other positions you articulated.

What's funny is that the compiler probably compiles the small map-reduce and the for loop to exactly the same bytecode. :)

Not my area of expertise, so I want to ask this question: Do compiler optimizations exist to automatically parallelize a for loop, which can be parallelized, no matter how the programmer wrote their for loop?

If not, then map / reduce offers non-trivial performance benefits over a for loop. Mapping, as far as I'm aware, is "embarrassingly parallel", assuming the function that's mapped is pure? I think?

In Perl 6 this is done with the hyper operator, but for a small data set a (non-parallel) for loop will almost always just run faster.

No, map reduce can be run in parallel, at least in some cases.

Can these experts advocating “simplicity” offer a concrete way to measure it?

I think we teach Newton's 3 laws because they are largely correct.

Sorry for my little pedantic observation:

The author wants the word "simple" here, not "simplistic".

The reason I even bring it up is that simplistic means overly simple, and that's a real problem in software development in my experience.

Overly simple means you aren't actually dealing with significant issues in the problem space. If anyone ends up using your simplistic software, they will have to solve the problems you ignored in a non-optimal way.

Simplistic -- overly simple -- is the flip-side of overly complex and can cause just as many problems, since it pushes extra complications elsewhere.

What we'd like is software that is as simple as it can be, but no simpler.

I think he _does_ mean "simplistic." He means "so simple that people criticize you for it" which lines up with simplistic.

Kiss rules are good for lifestyle and everything!

"If you can write a program using nothing but the simplest syntax, it is a net win."

In other words, write everything in Brainfuck FTW? Seriously?

That's not simple. It's a very minimalistic syntax, but it is definitely not simple.
