Hacker News
Why I prefer objects over switch statements (enmascript.com)
112 points by octosphere 29 days ago | 99 comments



Firstly, switches are arguably way, way (waaaay!) more readable than the code proposed. It's a matter of taste for sure, but arguably a basic construct common to all curly-brace languages and beyond is easier to understand than rather esoteric code where a pattern is substituted for a keyword. It is true, though, that switch statements in JavaScript have issues with breaks and code blocks, but any half-decent linter will tell you about those.

Most importantly though, the performance profile of the solution proposed scares me a lot. To understand why, consider that it is not uncommon for switches to be JITted as:

- if statements and gotos for a small number of options, or
- collections of lambdas for a high number of choices (note: very likely much more optimized than the lambdas proposed!)

The reason they are built this way is performance (another commenter ...commented that performance doesn't matter - they are wrong: a switch can be nested in a hot loop run millions of times, and there it does matter). Therefore, it's easily arguable that the presented pattern will perform significantly worse in some cases (few choices), and that should not be underestimated.
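For a rough sense of the cost, here's a toy micro-benchmark sketch (Node.js; the numbers vary wildly by engine and JIT warmup, so treat it as purely illustrative):

```javascript
// Toy micro-benchmark: switch vs. a fresh object literal per call.
// Engine- and warmup-dependent; purely illustrative.
const switchLookup = (k) => {
  switch (k) {
    case 'a': return 1;
    case 'b': return 2;
    case 'c': return 3;
    default:  return 0;
  }
};

// Allocates a new object on every call, as in the article's pattern.
const objectLookup = (k) => ({ a: 1, b: 2, c: 3 }[k] || 0);

const bench = (label, fn) => {
  const keys = ['a', 'b', 'c', 'z'];
  let sum = 0;
  const start = Date.now();
  for (let i = 0; i < 1e6; i++) sum += fn(keys[i & 3]);
  console.log(label, Date.now() - start, 'ms (checksum', sum + ')');
};

bench('switch:', switchLookup);
bench('object:', objectLookup);
```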


> switches are arguably way, way (waaaay!) more readable than the code proposed

The point in the article is not that switches are less readable, but that switches can be "less" readable. Or—more accurately—they can look like they're readable but obscure unexpected subtleties.

The object syntax may be slightly less readable than the switch in its simplest, most well-written form, but the point is that unlike the switch, it always unambiguously does what you expect.

It's also worth noting that the object syntax is extremely idiomatic in modern JS. It may look less readable to a generalist, but to anyone regularly maintaining JS it's far more familiar than the switch. (I guess this isn't so much a point in its favour as a point against modern JS being easy to pick up, but hey.)

> it is true that switch statements in JavaScript have issues with breaks and code blocks, but any half decent linter will tell you about those.

While you have a point about linters, and I do use a strict one in every project, I'm much more comfortable with it being a safety net than the first line of defence.

Also, those problems with breaks and code blocks are hardly unique to JavaScript.


switch statements are way more readable than what is demonstrated in the article. It's not even close.


I disagree :) It's easier for me to look at an object and its contents and see where it is referenced. I think switch statements are ugly. I think if/else blocks are ugly. I prefer boolean statements when possible, although definitely not nested ternaries.

I'm sure I'll get flamed, but I really liked this:

  const getPosition = position =>
      ({
          first: 'first',
          second: 'second',
          third: 'third'
      }[position] || 'infinite');


I think equating ugly with hard-to-read is a common mistake. Many things are "ugly" but easy to read. In design circles, for example, newbies often use far-too-small fonts with low contrast because it looks pretty, whereas accessibility guidelines tell you to do the opposite. In programming, the readability of point-free style has more to do with the reader's ML-vs-Algol experience than with how elegant FP may be, and educators often start with procedural style precisely because reading the flow of the program matches students' expectation that things ought to be read top to bottom, rather than peeled through layers of abstraction.

Honestly, I find that rating the readability of the object versions vs the switch statement is bikeshedding. This is perfectly readable and doesn't involve going back and forth with my eyes to figure out the data flow or figure out if I have mismatched brackets or what have you:

    const getPosition = position => {
      switch (position) {
        case 'first':
        case 'second':
        case 'third':
          return position;
        default:
          return 'infinite';
      }
    }

Another thing worth mentioning is that objects consume memory, and allocating memory in JS for this just wastes orders of magnitude more cycles and sacrifices throughput for no good reason - other than to try to be clever and avoid an idiomatic and optimizable construct.
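For what it's worth, the allocation complaint can be sidestepped by hoisting the object, so it is built once rather than per call (a sketch using the getPosition example from upthread):

```javascript
// Built once at module load, not on every invocation.
const POSITIONS = { first: 'first', second: 'second', third: 'third' };

const getPosition = (position) => POSITIONS[position] || 'infinite';
```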


>I think equating ugly with hard-to-read is a common mistake

This is just a semantics argument. I can replace "ugly" with "hard-to-read" if you want. I think it's both, in this instance.

>Honestly, I find that rating the readability of the object versions vs the switch statement is bikeshedding.

I agree in the context of code review, but since this is what we're talking about I figured I'd share my opinion.

>Another thing that is worth mentioning is that objects consume memory and allocating memory in JS for this is just wasting orders of magnitudes more cycles and sacrificing throughput for no good reason - other than to try to be clever and avoid an idiomatic and optimizable construct.

Premature optimization. If you need to optimize, then do it. Otherwise you should prioritize what you consider understandable and easy-to-write code. I guarantee the vast majority of JavaScript written would not be adversely affected by this.


I'd ask you to reconsider what an optimization is, and to also consider the concept of "premature cleverification".

A switch statement is a pretty idiomatic choice for the example. Merely knowing more about the low level cost of different options does not make a snippet an optimization.

Examples of actual optimizations (from real world code I've seen) would include using bitmaps, lookup tables, large regexps, tries and binary search. All of these have the property of obscuring the search space in the name of performance, and many come with trade-offs such as increased startup cost or code size. Choosing to use a trie here, for example, without considering its trade offs would be a premature optimization. A switch statement is at worst a refactor that maintains the same algorithm.


What happens when your values are computed? Or expensive to compute, and your map starts to get big? E.g.:

    {'a': foo(), 'b': bar() }[...]

If foo() or bar() ever introduce side effects or become expensive, you're gonna have a hell of a time figuring out what's going wrong.


If your values are expensive to compute like that, then you'd have the object return a function that would call foo(), bar(), and so on, and you wouldn't call them all up front to set up the map in the first place.
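Something like this (a sketch; foo and bar stand in for the expensive computations from the parent comment):

```javascript
// Wrap potentially expensive values in functions so that only the
// selected entry is ever evaluated.
const foo = () => 'foo-result'; // imagine something expensive here
const bar = () => 'bar-result';

const handlers = { a: foo, b: bar };

const getValue = (key) => {
  const handler = handlers[key];
  return handler ? handler() : 'default'; // only the matched branch runs
};
```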


This is in fact demonstrated in the second example in "Working with functions"...


My personal preference is if-else/switch blocks with enums. Doing conditionals with string parsing just seems gross to me, but I find conditionals easy to read. And the benefits of using an enum is that you can easily find where it is used, which makes reading the code and adding or removing options easier.


And the problem with switch in the example is solved with a set of brackets. Why is that so bad? That code would be clear with brackets.


It always surprises me how few people use brackets on switch statements when the situation calls for one.


Plus, you can solve all the issues the article points out using linting tools like ESLint. Rules like no-case-declarations, no-duplicate-case, and no-fallthrough already exist for this exact purpose.


> another commenter ...commented that performance doesn't matter - they are wrong: a switch can be nested in a hot loop run millions of times, and there it does matter

I'm curious how often you find yourself dealing with loops that run millions of times? I think the majority of loops I've written don't need to deal with millions of iterations; most of them probably only rarely break 1000 iterations, and I know for sure that a lot of them can't exceed 100 iterations because of limits in the data.

Seems to me that using a switch over another structure for performance at the expense of readability or maintainability is an example of premature optimization unless you're positive the condition is going to be in a hot loop.


> Seems to me that using a switch over another structure for performance at the expense of readability or maintainability

But it is not more readable or even more maintainable.

Plus there is absolutely no optimization involved; it's just another way of expressing it.


I was replying to a comment about the performance implications. If you decide on one method over another for performance reasons, I think it's an optimization.

> But is not more readable or even more maintainable.

Both are a matter of opinion and depend on the specific use-case.


> I'm curious how often you find yourself dealing with loops that run millions of times? I think the majority of loops I've written don't need to deal with millions of iterations

It's trivial to come up with 80s-level counter-examples to this.

A megapixel image, for example, is tiny.


It's also trivial to come up with countless "80s-level" examples that never iterate more than 100 times.

Iterating over a megapixel image isn't a common scenario unless you're processing a lot of large images. Obviously you should optimize hot code paths.


Uh, emulation and scripting language byte code engines? State machines? Filtering?

There are plenty of reasons to run a switch statement in a loop, even far more than millions of times.


> emulation and scripting language byte code engines

You get that these are outliers right? The majority of software doesn't need to concern itself with these problems. They're examples of software that deals with analyzing and running other software.

I'm not sure why I'm bothering to respond to these comments. This whole discussion has apparently been system developers telling app developers they need to start micro-optimizing their cold loops because "what if your user clicks the button 12 million times in under a minute".


It's actually a big issue in any performance-critical region of code. I've come across this in both the network stack and the file system (user space and kernel).

Switches can sometimes be optimised into O(1) dispatch using clever static compile-time tricks, but if you rely on jump indirection you get branch misprediction penalties, or, even worse, with a ladder of ifs you are doing O(N) work (where N is the number of cases).


Seems like the key is "performance critical regions of code". Yeah optimize code that is performance critical. Don't optimize the other 99%.


I agree with you, yeah. The problem is sometimes you have to go back to old code and optimise it, because today's performance expectations are higher than yesterday's - whether because hardware has become more powerful or software is being pushed to its limits. So I'm a bit biased, thinking, "If only this had been optimised on day 1", but truthfully, recognising the requirements and reacting to performance needs as they arise can be less wasteful than having ironclad performance requirements that won't be met or don't need to be met.


Anything with successive iterations / optimization phases, like simulation or even simple machine learning.


That's why I said "majority". Obviously you should optimize code that is run frequently. I could be wrong, but last time I checked most applications don't involve even "simple machine learning" and when they do it's in a library or service that abstracts those concerns away.


moderately sized database, pixels on a screen


Both of these seem like cases where you would know you're in a hot loop. I wasn't really clear in my comment that of course you should optimize loops that you know get run millions of times. I was saying: pick your tools based on the context of the code you're writing. Most simple applications involve tens or hundreds of loops that rarely iterate over more than a few values and don't get run more than once every few seconds.


single-page web app doing n times the amount of work it should be doing because components are synced over an event bus with bindings set up wrong in a way that nobody has bothered to debug


This wouldn't really be Hacker News unless an arrogant Lisp devotee shows up and claims that Lisp does everything better, so I'll now try to be that arrogant Lisp devotee.

Sean Johnson has given a fantastic talk about pattern matching in Clojure:

https://www.youtube.com/watch?v=n7aE6k8o_BU

He offers some interesting comparisons between Clojure and Erlang.

Going even further, I recently discovered Dynamatch:

https://github.com/metasoarous/dynamatch

"Dynamatch addresses these challenges by enforcing a certain structure in our ordering of match clauses such that many of the incidental complexities of order dependence in a dynamically extensible pattern matching system become more manageable."

Sometimes it seems like Erlang or Haskell has the last word in Pattern Matching, but I'm not aware of anything like Dynamatch in those languages.


No thread on HN would be complete without the JavaScript programmer jumping in to "well actually" your response, so I formally nominate myself to fill that role. ECMAScript is getting pattern matching! It's currently a stage 1 proposal with backing from Microsoft, npm, and Facebook.

https://github.com/tc39/proposal-pattern-matching


That's interesting, but makes me wonder: how does this spec cover these cases:

- select for a map with key "a" and not "b"

- select for a map with key "a" and possibly "b"

- select for a map with key "a" and possibly any other key.


It's the first step into pattern matching, so I'm not surprised that it isn't as complete as other languages' implementations. But even without further language modification, I'd bet there are some tricksy things you could do to replicate that behavior. As the saying goes, "JavaScript finds a way"


I was researching Clojure vs Node.js for a webapp, but found that the benefits Clojure provides are very small and not enough compared to the benefits you get with the JS ecosystem. And JavaScript in 2019 is actually nice.


The usual problem with such comparisons is that in order to understand the benefits of Clojure/ClojureScript, you have to use those tools to write a reasonably large system. Otherwise the comparisons are superficial.

The other problem is that most comparisons are apples-to-oranges. Does core.async sound kind-of-like async/await? Write a checkmark in a comparison table and move on. But reality is nothing like it.

Every time I have to write some JavaScript (interop) code, I am amazed that people put up with all the incidental complexity. I mean, just recently I had to implement three different ways of accessing data which was essentially in an array. Three separate pieces of imperative code with iteration. In ClojureScript that would have been zero code, because everything that is "array-like" can be accessed as a seq. That sort of thing does not come up in superficial comparisons.


I agree, Clojure is technically superior in almost every way when compared to JS IMO, and I would much prefer to write Clojure than JS. My poorly formulated point was that sometimes the ecosystem is more important. An experienced dev who already knows how to do everything in the domain he's working in won't have much trouble with Clojure, but you don't always know how to do some things, and this is where the ecosystem (third-party APIs, docs, books, libs, forums, etc.) helps greatly.

Also, for Clojure and ClojureScript you need at least a good level of reading proficiency in Java and JavaScript, respectively.


I don't think pointing the way to a better understanding of the problem is arrogance, especially when these types of posts effectively just add to other learners' misunderstandings. I can tell the poster is still learning from that title, and because they call a switch statement on constants "typical" (as opposed to seeing it as a broken implementation of match).

The fundamental issue is called the "expression problem", and arises because the problem of assigning behavior is two dimensional (one dimension is the types/cases, the second dimension is the methods/operations), and possibly open along either dimension. Match works better when the methods/operations are open. Objects work better when the types/cases are open. If they're both open, then you need to figure out which one to make less open. At best, you can carve off partial sections where one particular dimension is open by fixing the other dimension, etc.

CLOS kind of punts and has you express each element of the matrix on its own. Which doesn't actually solve the problem, but at least makes it symmetrical.

That is the state of the art, AFAIK. Fixing the problem along either dimension is enough to make a workable language, but neither one is "better". There could be something better, but we haven't found it, and we're certainly not going to find it if people don't appreciate the whole problem!


What about method dispatch in CLOS? Pretty sure that counts as "does everything better" - at least when it comes to mapping tuples of values to methods.


Method dispatch is arguably a form of pattern matching: pattern matching on the type signature of the argument tuple. (With eql specializers, on the object/numeric identities of the arguments, not only types).


And with some CLOS extensions, it can also be extended to other filters besides just eql specializers (this is also in Clojure's implementation of multiple dispatch).


http://www.p-cos.net/documents/filtered-dispatch.pdf

The link I meant to include before.


There's lots of research about the subject: https://duckduckgo.com/?q=pattern+matching+compilation&atb=v...

David Nolen has publicly cited work on optimized pattern matching.


Hmm. It seems to me that you failed at the "arrogant" part.


> Object lookups are fast and they're faster as their size grows

It's difficult to imagine a switch large enough where the performance difference would matter, but this ignores the memory required to store the lookup in the first place. In all of the switch examples a switch is more straightforward.

In the latter examples (see the Boolean example) we now perform the lookup twice if the value is present. I feel like this is just another case of "use the right tool for the job".


A lot of C/C++/Fortran compilers have very well-developed handling for huge switch statements because of generated code. Interpreters and other finite state machines, etc etc.


Sure. I didn't want to go into too much depth on performance here as I'm a C and C<variant> guy primarily, only writing JS when I have to.


I seem to remember some colleagues once getting performance problems because V8 didn't optimize switch-cases with more than 512(?) cases. Which was easily solved by adding an "if", splitting the table into two, until V8 removed or raised that limit.


If you're using an object, the object and any closures in it also need to be constructed on each execution. If the object is large, you're not just doing a lookup, you're constructing a full map object.

With a switch, there is no up-front allocation, case expressions are only evaluated if the case is reached, and the body of a case is only evaluated if the case is executed. The lookups are hardly the performance concern.


Of course, you could allocate once by extracting it or with an IIFE + closure from TFA's functions.

    const lookup = (() => {
       const map = { a: 1, b: 2 }
       return (key) => map[key] || 'not found'
    })()


That helps solve the performance issue within one execution of the surrounding function, but if the surrounding function is called many times, you're introducing more closures and more indirection. Switch statements involve no allocation. Closures, objects, and function calls aren't free.


Well, in my example, the surrounding function is instantly invoked once and cannot be invoked again. It's merely a useful pattern to avoid polluting the top-level with extraction.


I mis-read the title as meaning a preference for using classes to handle each case over ML-style switch/case statements.

The JavaScript 'object' here is called "map" or "dictionary" etc. in other programming languages. (And the article's technique is fine).


While you can use any value as an object property key in JavaScript, I’m not convinced the negative performance tradeoffs are worth the supposed benefits in the article.

I appreciate `switch` - its case fallthrough surprises beginners, but most C-style languages share this quirk - and modern-day compilers and linters will gladly remind you that using Duff's Device-type tricks in JS doesn't work.

As an aside, in C#, a string-based switch statement is actually compiled to a lazily-initialised hidden Dictionary<String,Int32> object where the values are the real integer case values - so kinda similar to the linked article - except without the runtime possibly reallocating and reinitialising the dictionary object on every invocation.


This is one nice thing about Go, which uses an explicit `fallthrough` keyword, with break being the default. You can still fall through if you want.

However, I've come to love pattern matching in Rust. It prevents fall through entirely, which makes it unnecessary to check whether the code is doing something clever with fall through, which makes the code simpler to parse.


> While you can use any value as an object property key in JavaScript

Strings and Symbols only. Everything else is converted to a string, which works most of the time for lookups, but can fail depending on the stringification or if you're pulling keys back out of the object.
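A quick illustration of that coercion (the object and keys here are just illustrative):

```javascript
// Non-string, non-Symbol keys are coerced to strings before the lookup.
const obj = {};
obj[1] = 'one';            // stored under the string '1'
obj[{ id: 1 }] = 'first';  // stored under '[object Object]'
obj[{ id: 2 }] = 'second'; // silently overwrites the same '[object Object]' key!

console.log(obj['1']);                // 'one'
console.log(obj[{ anything: true }]); // 'second'
console.log(Object.keys(obj));        // ['1', '[object Object]']
```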


The author's basic complaint with the Javascript switch statement is that it's essentially a structured series of 'goto' statements. They think that each 'case' should behave as an 'if-else' where each case has its own lexical scope, and the control flow doesn't leak from one case to another. They think that pattern matching is a better model for 'switch' in practice.

How often do you use a 'switch' statement whose cases don't always end in 'return' or 'break'? The "coroutines in C" [0] article is a clever use of switch-case as a goto, but it seems like you need to invent new types of control flow to use 'goto' properly. Does anyone have other clever uses of 'switch'?

[0] https://www.chiark.greenend.org.uk/~sgtatham/coroutines.html


This is my favorite from a library I wrote ~6 years ago:

https://github.com/ricardobeat/require-tree/blob/master/inde...

The goal is to accept a 'filter' argument that can either be a string, an array of strings, a regular expression, or a filter function. It fully uses fall-through and the multiple entry points. I find it magical, in that it turns all of those into a function so the remainder of the code doesn't have to care, and it's not any less efficient. Similar feeling to finding a use case for 'Infinity' :)

    switch (type(filter)) {
        case 'array'    : filter = filter.join('|')
        case 'string'   : filter = new RegExp('^('+filter.replace('*', '.*')+')$', 'i')
        case 'regexp'   : filter = filter.exec.bind(filter)
        case 'function' : return filter
    }


This is one of the most amazing pieces of code I have ever seen


I would like it better if switch statements used the keyword `continue` to fall through, with `break` implied.

Switch statements are much easier to maintain if you keep them one or two lines long, and just have it immediately delegate to a function call then break. I think that's true for almost any code-flow syntax: if/elseif/else blocks, various loops. They all break down quickly if you have too much in them.


I agree. Their complaint about the use cases (pun!) for switch seems to ignore the flexibility of the syntax. I don't think the requirement to include breaks/returns is a problem if you know how it's intended to work. I prefer switches for the easy-to-follow flow but also for the ability to "waterfall" multiple conditions, which is syntactically clunky with other approaches.


> How often do you use a 'switch' statement whose cases don't always end in 'return' or 'break'?

I do, occasionally. I prefer the inverted golang switch where fallthrough needs to be specified, since these cases (heh) are generally the minority.


I mean, they're both useful for different purposes.

When you're dealing with 4 possible values, each of which will result in wildly different code (e.g. evaluating the value of a options variable, or handling error codes that mean very different things), then switch is clearly the way to go.

When you're dealing with 20 or 200 different values, all of which map to a few similar variations, then defining an object or array lookup is clearly preferable.

"Preferring" objects over switch statements is like saying you prefer bitmaps over vector drawings -- it's nonsensical. Different tools are better for different jobs.


I remember getting in arguments when I was doing F# over my use of Maps of functions to basically handle dynamic dispatch. I would construct a map like `let handlers = [| "somekey",func1; "otherKey", func2 |] |> Map.ofArray`, and then when I needed to handle something down the line, I would write something along the lines of:

    let myHandler = defaultArg (Map.tryFind theKey handlers) (fun x -> (* default stuff *))

    myHandler theValue ....

I liked this approach since I could dynamically add functionality, it could be completely decoupled from the business logic, and I didn't have to use strings for keys; but my coworkers didn't like how dynamic it was, since, in fairness, it did sometimes make it a bit more difficult to figure out which path the code was going to go down.

Never really determined who was "right" in this case, but this post reminded me of that.


Alternatively titled: "How to go to war with your linter, type checker and other static analysis tools and win"


I wrote an old computer emulator in JavaScript. The inner loop is a large switch statement where each case handles one of 256 opcodes. Firefox handled it fine, but Chrome performance was poor. It turns out that above a 128-way switch, the JIT gives up (or so it seemed).

First I tried doing what the author suggested -- having 256 routines, and a dispatch table. Chrome performance got better, and Firefox performance got worse.

In the end, the fastest thing to do was to have "if (opcode < 128) { 128-way switch } else { 128-way switch }".

That was 2014, so likely things have changed.


Ironically, that's precisely how a lot of old Z80 and 6502 programs managed conditional execution. We used to call it a 'jump table'.


In Swift, switch is my new favorite tool, combined with Swift's enums and amazingly strong native pattern matching. It has absolutely changed how I write code, and makes me wish it were a tool I could reach for whenever I'm working in another language. Some examples: http://austinzheng.com/2014/12/16/swift-pattern-matching-swi...


Came here to say this. Many of the complaints about switch statements are about classic, kinda terrible, implementations of switch statements. Swift gets that right. Like, really super right.


I'm conflicted. Swift's switch is powerful, but it's also complex, and even after using it for years, I still have to occasionally look up the syntax for some variant. Other HLLs have other ways of accomplishing these tasks that work just as well.

The whole language feels like they crammed every possible feature into every other feature, as a cartesian product of syntax, rather than doing what Lisp or Tcl or Forth does, with simple syntax that's flexible enough that everything naturally works everywhere. Someone even made http://fuckingifcaseletsyntax.com for Swift, so I don't think I'm alone here.

You can really see the C legacy by the name and overall structure. I still miss the simplicity and flexibility of COND, and :keywords. It's nice that Swift can identify unhandled enum cases, I guess, but I can't say that's ever been a problem I've run into.

Most of the examples here I would prefer to write as a dictionary literal (more declarative), or possibly a method on the enum (easy in Swift). It's only single-dispatch, but it's still much better than burying functionality inside single-use, untestable switch statements in the middle of a func. If something is useful enough to justify writing 8 or 10 lines of code to handle a set of cases, then I guarantee I'm going to want to evaluate it in the debugger next week.

The older I get, the less Turing-complete code I want to write. Code is a liability. Constant tables are pure value. Switch, then, is the worst: it takes something which looks very much like a constant table, and forces it to be code.


The article provided some good examples of times when code readability can be increased without a bulky switch statement. However, there are times when I find the switch statement most closely communicates the idea of what needs to occur to some developer in the future.


As someone who has migrated between heavy use of these patterns in the past (object -> switch), I'd like to provide a few counterpoints.

First, and this is more of a general observation for any kind of programming content, these pompous-sounding abstract value judgements need to stop:

    1. Is more structured.
    2. Scales better.
    3. Is easier to maintain.
    4. Is easier to test.
    5. Is safer, has less side effects and risks.

Regarding `switch`, only the last point is a fact, and that's because of the `break` statement peril. Still, there aren't really side effects or other 'risks' involved. Everything else is completely subjective and not supported by the examples above. I, for example, find switch easier to maintain, as you don't need to juggle variables defined outside the object to keep it clean.

Second, these articles use innocuous examples that don't reflect real use cases, and hence fail to demonstrate their utility. You'll find a ton of switch statements in any kind of parser since it's the perfect construct for the occasion where each branch can wildly differ in content and complexity, and might embed flow control that would complicate the object-based version:

    switch (node.type) {
      case "Identifier":
      case "ObjectPattern":
      case "ArrayPattern":
        break

      case "ObjectExpression":
        node.type = "ObjectPattern";
        for (var i = 0; i < node.properties.length; i++) {
          ...
        }
        break

      case "ArrayExpression":
        ...
    }

Finally, `switch` is wonderful when paired with `return`, since it eliminates point 5 above. Sample taken from a project I have lying around:

    switch (unit) {
        case 's': return value * 1000;
        case 'm': return value * 1000 * 60;
        case 'h': return value * 1000 * 60 * 60;
        case 'd': return value * 1000 * 60 * 60 * 24;
        default : return null;
    }

With the key lookup, you'd also end up precomputing all of those values (imagine that's a slightly more expensive operation than simple math), or turning each one into a function. Another good example is the state reducer pattern:

    switch (action.type) {
        case 'ADD':
            return state.concat(action.payload);
        case 'REMOVE':
            return state.filter(item => item.id !== action.payload.id);
        default:
            return state;
    }

The key lookup pattern can hold its own in simple cases, but it's hard to justify it with anything more than stylistic preference.
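For comparison, the "turn each one into a function" version of the unit example would look something like this (a sketch; it avoids precomputation at the cost of a closure per entry):

```javascript
// Each entry is a function, so nothing is computed until a unit is selected.
const toMillis = {
  s: (v) => v * 1000,
  m: (v) => v * 1000 * 60,
  h: (v) => v * 1000 * 60 * 60,
  d: (v) => v * 1000 * 60 * 60 * 24,
};

const convert = (unit, value) => {
  const fn = toMillis[unit];
  return fn ? fn(value) : null; // mirrors the switch's `default: return null`
};
```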


This is exactly correct. The cost of holding the entire object in memory, for any significantly complex statement, is going to add up quickly if you're dealing with large numbers of users/pageviews/etc. The cost of pre-calculating everything in the object for any complex math similarly gets large when you're dealing with something large-scale. Early returns matter a lot at scale, and make [code line of sight](https://medium.com/@matryer/line-of-sight-in-code-186dd7cdea...) much clearer - which matters a lot if your team is larger, as in many companies that might have multiple teams working on one shared codebase.


I don't know JavaScript, but in the proposed "structured" code, doesn't it execute initialization statements for every potential case even when you call it only once? I.e.,

    const getValue = type => {
        const email = () => 'myemail@gmail.com';
        const password = () => '12345';
        ....
Now "const email = ..." will be executed every time even when you're just asking for the password. Eventually, such code "scales": it becomes a bloated behemoth with twelve cases, called a hundred times deep inside a server, initializing everything every time it is called, with potential side effects...

...and then one day a starry-eyed new hire looks at the top-level code, thinking "Heh, this is an internal graph server, why does it need customer email addresses?", removes the top-level config line, and then suddenly all internal dashboards go blank because they can't read email addresses.

...Yeah, you can probably tell that I'm not a fan of this technique.
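The concern is easy to demonstrate (names below are hypothetical): with plain values in the lookup object, every branch's initializer runs on every call. The article's arrow-function wrapping avoids the work, but still allocates fresh closures per call; hoisting the object to module scope sidesteps both.

```javascript
// Hypothetical sketch of the eager-initialization pitfall.
let emailLookups = 0;
const fetchEmail = () => { emailLookups++; return 'myemail@gmail.com'; };

const getValueEager = type => ({
  email: fetchEmail(),   // runs even when we only ask for the password
  password: '12345',
}[type]);

getValueEager('password');
// emailLookups is now 1: fetchEmail ran although email was never requested.
```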


Fun fact: There's an optimization technique for OO languages called polymorphic inline caching that boils down to... a switch statement that speeds up method dispatch by branching directly to a method implementation for one of a few common types. If the object is none of those types, it falls through to a more conventional method lookup.
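A user-land caricature of the idea (real inline caches live inside the JIT and patch generated code; everything below is invented for illustration), shown with a single cache entry, i.e. the monomorphic case -- a polymorphic cache just keeps a short list of (type, method) pairs:

```javascript
// Wrap an expensive method lookup with a one-entry inline cache.
function cachedDispatch(slowLookup) {
  let cachedType = null;
  let cachedMethod = null;
  return obj => {
    // Fast path: same type as last time, branch straight to its method.
    if (obj.type === cachedType && cachedMethod) return cachedMethod(obj);
    // Slow path: full lookup, then remember the result for next time.
    cachedType = obj.type;
    cachedMethod = slowLookup(obj.type);
    return cachedMethod(obj);
  };
}
```

Calling the returned dispatcher repeatedly with objects of the same type hits the slow lookup only once.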


The discussion is missing the most important point, IMHO: readability and maintainability of the larger system in the long term.

I used to love objects and multimethods (or single-dispatch multimethods for those more limited languages). But then I ended up debugging a large code base which used them extensively. It is a nightmare: by reading the code, there is no way to find out what all the dispatch options are, and without interactively debugging it there is no way to see which code will get called (inheritance messes things up greatly).

I think performance is secondary to these problems, so these days I prefer switch statements (or pattern matching), for their simplicity and reliability.


I've never used this pattern in Javascript, but it's somewhat common in Python because of how handy the dictionary .get() method is. A few problems TFA solves in JS are much easier with .get(), like defaults and falsy values.


Plus the fact that python doesn't have a switch statement so you kind of have to use a dict if you want that functionality without a long chain of if/elif/else.


This is some kind of funny joke article


Isn't this basically a convoluted way to end up with a dictionary or command pattern?


Objects vs switch is not either/or. They both have valid use cases.


I don’t know, feels like a step back to me.

It could be as well lookups on a hashmap.

Not very expressive.


I don't think any one pattern rules out any other, but this is a good one for people to keep in mind as I think sometimes it makes more sense. Heck, there are even some cases where an object lookup makes more sense than an if-then-else statement.

I'm also a fan of other patterns like early returns, but few people ever seem to do it.


I like early returns as well, especially in short functions where there is not a lot of code between returns. In other words if I can see all returns in a function on one screen of text then I'm OK with it and I think it improves readability.

On the other hand if I have to scroll up and down a lot to see all return paths then that's a problem. But I generally find the problem is that the function is too long/does too much and should be broken down into smaller pieces.
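A short illustration of the style (domain and numbers invented): guard clauses handle the edge cases up front, so the happy path stays flat and every return is visible at a glance.

```javascript
// Early returns: each precondition exits immediately instead of
// nesting the rest of the function one level deeper.
function shippingCost(order) {
  if (!order) return null;                // nothing to price
  if (order.items.length === 0) return 0; // empty cart ships free
  if (order.total >= 100) return 0;       // free-shipping threshold
  return 5.99;                            // flat rate otherwise
}
```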


Another compelling argument for the same position:

https://toddmotto.com/deprecating-the-switch-statement-for-o...


Ah, but can objects be used to implement Duff's Device?


Kinda-sorta:

    function pseudoduff(count) {
        var x = 0;
        var n = Math.floor((count + 7) / 8);
        var o = {
            0: function() { x++; o[7]() },
            7: function() { x++; o[6]() },
            6: function() { x++; o[5]() },
            5: function() { x++; o[4]() },
            4: function() { x++; o[3]() },
            3: function() { x++; o[2]() },
            2: function() { x++; o[1]() },
            1: function() { x++; n--; if (n > 0) { o[0]() }},
        }

        if (count > 0) {
            o[count%8]();
        }

        return x;
    }
That just increments to show the point, it would be easy to make it do some sort of real work.

Of course you'll run out of recursion at some point. Trampolining could fix that. I leave it as an exercise for the reader whether that makes it more or less silly.
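For the curious, the trampoline in question is only a few lines (a generic sketch, not wired into the duff code above): handlers return thunks instead of calling the next handler directly, and a driver loop bounces through them on a flat stack.

```javascript
// Keep calling the result for as long as it's a function (a "thunk").
const trampoline = fn => (...args) => {
  let result = fn(...args);
  while (typeof result === 'function') result = result();
  return result;
};

// Deep "recursion" that would normally blow the stack:
const countdown = n => (n <= 0 ? 'done' : () => countdown(n - 1));
const run = trampoline(countdown);
// run(1e6) completes without a stack overflow.
```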

I'm also of course aware that Duff's device doesn't make function calls, to which I vigorously handwave while chanting "Monads Are Programmable Semicolons" in my "Lambda The Ultimate" T-Shirt.

Lambda.



For mapping from one value to another, sure. My irk with this solution is that none of their "object" solutions actually does the same thing as the switch statement, namely logging the state in various manners to console.

Now, they could, of course, do it using the function method, but they did not. If the functional code had logs in them it would become quite apparent that a bunch of if-else statements would probably do the job better.
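To make that concrete (state names and messages invented), here's roughly what the object version looks like once each branch actually has to log something -- note that the default case needs an explicit guard that a switch gets for free, and the handlers buy little over a plain if/else chain:

```javascript
// Each branch performs its side effect via a handler function.
const describeState = state => {
  const handlers = {
    open:   () => 'door is open',
    closed: () => 'door is closed',
  };
  const handler = handlers[state] || (() => `unknown state: ${state}`);
  const message = handler();
  console.log(message); // the side effect each branch needs
  return message;
};
```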


Speaking of code readability, the article uses a function in the example called "isNotOpenSource()". If you have a function that returns a boolean, it's best not to bake a negation into the function name, i.e. "isOpenSource()" is cleaner and more readable.

Honestly, it's a code smell that makes me trust my initial reaction that the switch statement is indeed more readable than the author's solution.


Take it one step further, and move the switched-over value to the top:

  ((o) => o[expr_to_switch_over()]())({
    opt1: () => {
      stmts();
    },

    opt2: () => {
      stmts();
    },

    opt3: () => {
      stmts();
    }
  })


And then you can call it a “switch statement”!


I like doing this, and find it fits reasonably well with a functional programming style: it is to `switch` what ternaries are to `if/then/else` (i.e. a nice way to select values; not necessarily nice for executing statements).


You type far more code (which in JS matters A LOT) for a less flexible solution and for such a simple task.

I completely disagree.


I pity the devs inheriting his code.


I love this, but I feel TypeScript would be very unhappy with such a thing.


Between TSLint's no-switch-case-fall-through and an exhaustiveness check[1], I don't think any of these things are really problems with switch statements in TypeScript.

[1] Putting a function like this in a default case will enforce that all cases are declared at compile time:

    function exhaustive(exp: never): void {}


Type it as `Record<string, R>` where `R` is your return type and you’d be fine in most cases. You could even make `R` a union type if necessary.


I prefer if/elseif over switch statements.

ok, ok, I'll go sit in the corner.


Polymorphism vs pattern matching: fight



