Overthinking Fizzbuzz with Monoids (fayr.am)
123 points by KirinDave 5 months ago | 82 comments



Python can produce extensible solutions! It can!

(Edit from sillier comment.)

  def fizzbuzz(n):
      outstring = ""
      outputs = {5: "buzz", 3: "fizz", 7: "bar"}
      for x in sorted(outputs.keys()):
          if n % x == 0:
              outstring += outputs[x]
      if outstring:
          return outstring
      return n
Then all you gotta do to add more prime numbers is stick them in the dict (or factorize composite numbers you want to put in). Which is more idiomatic in python than endless if elif elif elif elif.


Hold up, "idiomatic Python" and no generator comprehensions?!

    def fizzbuzz(i):
      outputs = [(3, 'Fizz'), (5, 'Buzz'), (7, 'Bar')]
      return ''.join(word for x, word in outputs if i % x == 0) or str(i)


It was over 5 years ago and these are more accepted now.


Sorry it wasn't meant very seriously, just me expressing a caricature of idiomatic Python :)

So disclaimer: good old `for` loops are often a better choice unless you're doing a simple modification or filtering of another sequence.


Really? I actually liked that solution, I think it's both readable and clever. The rule priority is defined by the order of your list of tuples ('fizzbuzz' vs. 'buzzfizz'), so easy to change. That's superior to the solution above with sorted(outputs.keys()).


I would say my solution is borderline. It requires maybe one or two extra reads to grok in its entirety due to multiple conditionals, one of which is in a loop and the other outside. Here's a more readable version for someone looking at it the first time, or who is not too used to Python:

    OUTPUTS = [(3, 'Fizz'), (5, 'Buzz'), (7, 'Bar')]
    def fizzbuzz(i):
      result = ''
      for x, word in OUTPUTS:
        if i % x == 0:
          result += word
      return result or str(i)
Even the final `or` could be broken out to an `if not`, but in this case it's so simple I prefer putting it together, much like a coalescing operator.

And then for shits and giggles we can say screw it and put it all in one line:

    print '\n'.join(''.join(word for x, word in [(3, 'Fizz'), (5, 'Buzz'), (7, 'Bar')] if i % x == 0) or str(i) for i in xrange(1, 101))


Heh, I actually like blixt's solution better than mine too. :-)


Here's mine. Almost exactly the same as yours but written a tiny bit differently. Also, I wrote the complete program.

    #!/usr/bin/env python3
    
    # Word of factor
    wof = { 3: 'Fizz', 5: 'Buzz', 7: 'Bar' }
    
    def fizz (i):
    
        out = ''.join([wof[k] for k in sorted(wof.keys()) if not(i % k)])
    
        return out or str(i)
    
    for i in range(1, 101):
    
        print(fizz(i))
Some notes on my version:

1. I used a dict, like the parent of your comment did. Tuples are almost as nice, and you do get slightly cleaner code in the part where you say

    word for x, word in outputs
whereas ours says

    wof[k] for k in sorted(wof.keys())
However, when I wrote mine I imagined myself sitting in front of an interactive interpreter session.

Now let's say that we've each typed in the code we have here and then the interviewer says "now make it say Bazinga when the number is divisible by 19, and keep the order like with FizzBuzzBar".

Since my wof is global, I can add to it directly

    wof[19] = 'Bazinga'
But let's pretend that yours was global as well

    outputs.append((19, 'Bazinga'))
So aside from the global / not global we are on equal ground thus far.

But then the interviewer says "now make it say Fnord when the number is divisible by 13, and keep the order still".

Then once again I can simply

    wof[13] = 'Fnord'
Whereas if you did

    outputs.append((13, 'Fnord'))
then now you are out of order.

So you have to do some extra steps to ensure it's in order. Either you have to redeclare the whole thing (imagine there were one million pairs of numbers and strings!), or you have to sort it, or you have to find where you are going to insert and split it in two and join it around.
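(A small aside: the stdlib's `bisect` module handles exactly that insert-in-order case, assuming the order you want is ascending by factor. A sketch:)

```python
import bisect

outputs = [(3, 'Fizz'), (5, 'Buzz'), (7, 'Bar'), (19, 'Bazinga')]

# insort keeps the list sorted by the tuple's first element,
# so (13, 'Fnord') lands between (7, 'Bar') and (19, 'Bazinga')
# without re-sorting or redeclaring anything.
bisect.insort(outputs, (13, 'Fnord'))

assert outputs == [(3, 'Fizz'), (5, 'Buzz'), (7, 'Bar'),
                   (13, 'Fnord'), (19, 'Bazinga')]
```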

Not saying that mine is in any way better, just felt like sharing how I thought about this :)

2. (Wow note #1 was looong, sorry 'bout that.) I prefer splitting up my code like

    out = ''.join([wof[k] for k in sorted(wof.keys()) if not(i % k)])

    return out or str(i)
as opposed to your all-at-once

    return ''.join(word for x, word in outputs if i % x == 0) or str(i)
In my opinion splitting it exactly the way I did is more readable. Again, not a critique against what you wrote. Esp. since you said it yourself that your code was a bit tongue-in-cheek.

3. Even when split like I did there, I actually don't think my

    out = ''.join([wof[k] for k in sorted(wof.keys()) if not(i % k)])
is all that nice either.

One thing, which others have often pointed out, is that too much is happening in one line. But there is also another issue.

We all make typos or logic errors every now and then. In my experience it's much more difficult to debug list comprehensions compared to... well, whatever we call that which is not list comprehensions. Both with regard to error messages and with regard to following what's going on. I maintain that this is true even if you have a habit of using a lot of list comprehensions.


Cool, this is another valid way of doing it and I suppose it comes down to preference.

A few notes I'd like to mention though, that are not relevant to our toy examples, but may have implications in a production environment:

1. Unless you intend to store the resulting list from a list comprehension, it's good practice to prefer generator comprehensions because they don't grow in memory as the number of iterations increases. Obviously in our examples the sample size is so small that it doesn't matter, but the difference between ''.join(x for x in y) and ''.join([x for x in y]) can grow very large if y consists of many thousands of items.

Here's an example with just integers which requires over a million items before it starts becoming a concern – ultimately it depends on the memory footprint of each item:

    In [2]: %memit sum(x for x in xrange(1234567))
    peak memory: 46.87 MiB, increment: 0.00 MiB
    
    In [3]: %memit sum([x for x in xrange(1234567)])
    peak memory: 68.52 MiB, increment: 11.78 MiB
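The same shape shows up with `sys.getsizeof` (a rough sketch; exact byte counts vary by Python version): the generator object stays tiny no matter how many items it will yield, while the list pays for every element up front.

```python
import sys

gen = (x for x in range(1234567))
lst = [x for x in range(1234567)]

# A generator object is a fixed-size handle (roughly a hundred bytes);
# the list holds over a million pointers (several megabytes).
assert sys.getsizeof(gen) < 1000
assert sys.getsizeof(lst) > 1_000_000
```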
2. I still prefer the list of tuples in this situation as you have control of word order (you may not want to go through the factors in ascending order). There's [x, z].insert(1, y) which would change the list to [x, y, z] to avoid having to add to the end. Finally, because it's a list you can very easily do a one-time cost in-place sort:

    [(5, 'Buzz'), (3, 'Fizz')].sort(key=lambda t: t[0])
Again, it's silly to talk about performance in our toy examples, but small details like these can actually have memory and execution time implications if you don't consider the differences.

Ultimately, the best advice is to always write the code as readable and maintainable as possible and then optimize. Especially when dealing with a language like Python which was built for expressiveness, not leading performance.


By the way, speaking of sum, I find it a bit strange that Python allows

    'hello' + 'world'
but not

    sum(['hello', 'world'])
Intuitively I would have expected the latter to be possible given the former, but I guess it comes down to how the + operator and the sum function are implemented in Python, such that, counter to my expectation, sum is not a function that "applies the + operator" to its arguments. The notion that this is how it should work stems from my impression that "sum" belongs to the same family as "map", "reduce" and "apply" -- that these are somehow "functional" in nature, in the sense observed in the Lisp family of languages.
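(For what it's worth, CPython's `sum` goes further and explicitly rejects strings even if you pass a string start value, pointing you to `''.join` instead. A quick demonstration:)

```python
# sum() explicitly rejects a str start value (CPython special-cases it
# and suggests ''.join in the error message).
try:
    sum(['hello', 'world'], '')
    string_start_ok = True
except TypeError:
    string_start_ok = False
assert not string_start_ok

# With the default start of 0, the first step is 0 + 'hello', which also fails.
try:
    sum(['hello', 'world'])
    default_start_ok = True
except TypeError:
    default_start_ok = False
assert not default_start_ok

# The blessed spelling:
assert ''.join(['hello', 'world']) == 'helloworld'
```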


I guess sum was only implemented to support numeric values. However you can easily roll your own:

    >>> def add(it):
    ...   return reduce(lambda x, y: x + y, it)
    ...
    >>> add(['hello ', 'world'])
    'hello world'
    >>> add(x for x in xrange(10))
    45
Edit: almost forgot a possibly even more Pythonic way:

    >>> import operator
    >>> def add(it):
    ...   return reduce(operator.add, it)


    >>> import operator
    >>> def add(it):
    ...   return reduce(operator.add, it)
Ooh, I like this one. Thanks!


Looping back to why sum doesn't work with strings, it looks like they implemented it like this:

    >>> def sum(it):
    ...   return reduce(operator.add, it, 0)
Basically the first value is always added to 0. This has one important difference which can be a valid compromise if you expect to almost always sum numbers. If you try to call add([]) without the initial 0, you'll get an error because there's no way to know what the zero value is.

In a typed language you could use type inference and use the inferred type's zero value (if the language has such a concept for all types like for example Go does). In Python I guess you could fall back to None, but then you'd have code that doesn't behave consistently for all inputs.
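A sketch of that compromise in Python 3 (where `reduce` lives in `functools`): make the zero value optional via a sentinel, so an empty iterable is an error unless you explicitly supply one.

```python
from functools import reduce
import operator

_SENTINEL = object()  # sentinel so that None remains a legal element

def add(it, zero=_SENTINEL):
    # Without a zero value, an empty iterable is an error, just like
    # reduce(operator.add, []) -- there is nothing to infer a type from.
    if zero is _SENTINEL:
        return reduce(operator.add, it)
    return reduce(operator.add, it, zero)

assert add([1, 2, 3]) == 6
assert add(['hello ', 'world']) == 'hello world'
assert add([], zero='') == ''
```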


The notion that that is how it works probably stems from you not having flunked computer science.


Thanks for the response, I appreciated it :)


Associative arrays form a monoid, so this code is (perhaps surprisingly) a special case of the monoid library.

It has been a while, but I remember not including a version like this because it's a bit intense to throw a sort (either implicit or explicit) into this without even commenting on it.


You can form a monoid with almost everything. It's not a very interesting concept, and it doesn't require more attention than maybe reading the definition. These blog posts are pointless in my opinion. The title makes as much sense as "Overthinking cooking with edibles".


Monoids are certainly not the most "interesting" thing, but neither are arrays, or even sets. But they're quite important and useful in a parametrically polymorphic world. That's why it's worth introducing them to people new to FP.

Ultimately, these sorts of posts are feeders to conversations about other bigger abstractions like Functors.

The title and the post both grant that for fizzbuzz as asked in many settings, but without a simple domain these conversations tend to explode in complexity. So there it is.


Yea, I'm all for overthinking things, but as a solution to fizzbuzz, this article is silly. You don't need any complicated control flow or logic, or functional primitives, for an extensible, readable fizzbuzz.


Well, of course you do not "need" solid engineering to express the solution to a trivial problem. The point is not what you need to solve fizzbuzz, it's to give people a taste of what it feels like to do so.

And the result is: "oh, it feels normal, but I write less of my own code and reuse existing control flow code." Which is a lot of the point. It's also to point out simple abstractions like Monoid can find uses even in simple problems.


The thing is, you're not only complicating your code by introducing monoids, you're unnecessarily restricting yourself. The only thing you need is higher-order functions to be sufficiently general:

  let every m s i =
    if i mod m = 0 then Some s else None

  let fallback i l =
    if l = [] then [ string_of_int i ] else l

  let join l = String.concat "" l

  let fizzbuzz gens i =
    List.filter_map (fun f -> f i) gens |> fallback i |> join

  let main () =
    let fizzbuzzstd = fizzbuzz [ every 3 "Fizz"; every 5 "Buzz" ] in
    for i = 1 to 100 do
      print_endline (fizzbuzzstd i)
    done
  
I've pulled out `join` and `fallback` as separate functions to show that you can also parameterize the code over these functions. And you can use other patterns instead of "every n-th entry" by using generators other than `every`.

This approach can make joining O(n) as opposed to O(n^2) (naive concatenation) or O(n*log(n)) (ropes) worst case and is more general. For example, you can also generate numbered lists for each line ("1. Fizz 2. Buzz"). Yeah, that's silly for FizzBuzz, but we're sort of assuming that this is a placeholder for a real problem that we may want to generalize or tune for performance.

And if it's actually a monoid and we know nothing more about it than that, we can just go with `List.fold_left` as the join function. Works for all monoids, too, without mentioning the word.


First of all, thanks for your post. I don't think your code is wrong or bad in any way.

> The thing is, you're not only complicating your code by introducing monoids, you're unnecessarily restricting yourself.

Strictly speaking? I agree. All we need for this problem is semigroups. We never explicitly need a "monoid." But I'm not sure offering the "mempty" case is really over-complicating the code, it just bloats the rhetoric a wee bit.

But I think what you're referring to is the conceptual overhead of Monoids? I'll get to that in a sec.

> The only thing you need is higher-order functions to be sufficiently general:

    > code follows
For "sufficiently" general code, it certainly seems to be less general than a piece of code relying on monoid or semigroup operators. You've hardwired yourself to strings and arrays.

OCaml actually is doing a natural transformation for you from (Maybe ~> Maybe String) implicitly in there, so you're really relying on the fact that OCaml's got some monoid logic baked in.

> And you can use other patterns instead of "every n-th entry" by using generators other than `every`.

Right, the essence of fizzbuzz is in the list of partial applications of every onto a list of prime factors and strings. Everything else is repetitive joining logic. Which is why appealing to a pattern that captures nearly all that logic seems like a good idea, no?

> This approach can make joining O(n) as opposed to O(n^2) (naive concatenation) or O(n*log(n)) (ropes) worst case and is more general.

I do not see how it's more general unless you demand non-associativity which—while fair to do in an interview question—is quite unheard of in the programming world. In terms of runtime performance, rope concatenation is constant and a full retrieval is O(n + log n), not O(n log n), so I'm not sure what you're getting at here. But it's worth noting this code (like a lot of OCaml code) tends to pin onto exactly one data structure and can't simply imply the data structure from calling types.

> Works for all monoids, too, without mentioning the word.

There's an implicit value judgement here: that not naming this pattern is better. You mentioned it above with the "you complicate your code by introducing monoids." I'm curious why the default mode is that I need to defend naming a pattern that obviously saves code here, and captures more use cases. I don't mean this to be a contentious discussion, I just want to know why you think it's inherently bad.


Less? Even ignoring the glut of imports, it's significantly longer than the straightforward and more extensible attempt where you just chain a few `if`s. (The article did strawman that attempt, but it was transparent enough about it.)


Which of my examples is it longer than? It's been a while since I wrote this so...


None of the examples you specifically gave. As I said, it strawman'd the easy method.


I don't know what the "easy" method you're referring to is. I did include an else-if chain and did my best to not inflate it.

Even though it was 5 years ago, I need to revisit this blog to update the pronouns anyway, so please do correct me.


    loop i = 0..100 {
        out = ""
        if i % 3 == 0 { out += "Fizz" }
        if i % 5 == 0 { out += "Buzz" }
        if i % 7 == 0 { out += "Bar" }
        if out == "" { out = str(i) }
        display out
    }


Is this really much shorter than my Haskell example? If we count the import decls it seems to be a bit of a wash.


It's obvious and readable, and therefore maintainable, which is one of the most important parts of software engineering.

Again, as an example for exploring the concepts, this is a great article, as long as the overengineering isn't taken as a good thing in itself.


It's about 0.7x the length of the final Haskell version, ignoring indentation and imports, but I don't really care about golfing. My point was simply that your statement that you "write less of [your] own code" was not justified.


> It's about 0.7x the length of the final Haskell version,

The final Haskell version with the explicit "show" does 2 things your example here does not: offer a parameterized final case, and use a rope data structure for the strings.

But in a very real sense your code is a special case of the Haskell code c_wraith offered. You're using a builtin monoid (strings) and explicit control flow to do it, so your code can't be reused in other contexts.

If I tactically asked you for a feature that made the output = '' in some cases, you'd have to refactor pretty significantly. The Haskell version not only doesn't have that weakness, but it can even be used to do more abstract things like count the number of times Bar _would_ be printed if we wanted to do that.

> but I don't really care about golfing

Okay, but for someone that's indifferent you are sorta gut-punching c_wraith's code by penalizing Haskell for explicit modules and imports for libraries, for writing code in a style where I can explain it line-by-line, and by comparing a simplified version to my most complicated version.

> My point was simply that your statement that you "write less of [your] own code" was not justified.

It's a debate I'm willing to have, but I think my blog post defends this point. That code lacks any sense of abstraction over very common patterns, and it ultimately ends up appealing to a builtin monoid and its operations to accomplish the same thing.

It's fizzbuzz, so it's hardly a crisis. But the point of this entire discussion here is about how even in these small cases we can see big abstractions appear and be relevant. I'd rather add 4 import lines and lean on code tested till the cows come home than pretend I can't ever have a bad day writing bespoke logic.

Edit: it occurs to me that implicitly in this assumption is the notion that we must defend using simple, wide-reaching abstractions. Can you give a good reason why something as trivial as "monoids" isn't exactly the sort of thing we should all be using, as opposed to bespoke logic that on a good day conforms to this pattern? Why _wouldn't_ we?


> offer parameterized final case

The raw code is more parameterizable than the list, since it is customized with the extremely powerful language "pseudocode". It does so with no loss of concision and is no less abstracted.

> use a rope data structure for the strings

It's pseudocode. Change the first line of the loop to `out = rope()` if you care.

> you are sorta gut punching c_wraith's code by penalizing haskell for explicit modules and imports for libraries

Given I have explicitly excluded this every single time I mention it, I don't know why you are saying this.


> The raw code is more parameterizable than the list, since it is customized with the extremely powerful language "pseudocode". It does so with no loss of concision and is no less abstracted.

I can't tell if you're joking or not here. But let's pretend you're not.

> Given I have explicitly excluded this every single time I mention it, I don't know why you are saying this.

Sorry, I thought you were comparing WITH imports to the proper version. In that case the apples-to-apples version comparison is 0.625x (5/8 or 4/7) between your pseudocode and real world code with abstractions.

I'm... pretty happy with that! Even the version that uses ropes, can be reused in many contexts, and relies on external and highly tested code is only 1 line longer in all cases.

Thank you, I hadn't really considered pseudocode as a valid competitor, but now I realize it makes my blog post look a lot better.


> I can't tell if you're joking or not here.

100% serious. I don't see how you can seriously consider the version that is way more complex and much less flexible "better".

> the apples-to-apples version comparison is 0.625x (5/8 or 4/7) between your pseudocode and real world code with abstractions

Which code samples are you actually comparing? Because no matter how I try to slice it, the pseudocode I gave is shorter.


Nowhere near compact (or possibly obtuse?) enough for a Haskell version, surely? Here's one I found elsewhere on the net some time ago:

    (~>) :: (Integral a) => a -> String -> a -> Maybe String
    (n ~> str) i = str <$ guard (i `mod` n == 0)

    fizzbuzz = 3~>"Fizz" <> 5~>"Buzz"

    main = mapM_ putStrLn $ catMaybes $ fizzbuzz <$> [1..100]
At least it generalises nicely to arbitrary fizzbuzz numbers! (edit: as noted below this version as written fails to print the non fizzbuzz digits. An exercise for the reader :) )


TBF, I wrote this article 5 years ago. Monad Comprehensions were in fashion at the time.

Now every dang person wants to write an arrow with a tilde in it and exclaim, 'Naturality!' ;)

Gotta love the use of guard in this. Maybe I should revisit.

I suspect there is also a clever way to do this applicatively, but haven't really noodled with it to see if it lines up. Any chance to use applicativeDo!


    main = traverse_ putStrLn $ Compose $ fizzbuzz <$> [1..100]


I did not know about compose and this is a neat find.


I’m sure there's a way to define the main function above in an Applicative fashion, but my Haskell isn't really up to it!


    fizzbuzzbazz = mconcat
      [ 3 ~> "Fizz"
      , 5 ~> "Buzz"
      , 7 ~> "Bazz"
      ]
Also, I believe you should have something like

    main = mapM_ putStrLn $ map (fromMaybe . show <*> fizzbuzz) [1..100]
to print out the number when it isn't matched by any of the rules.


Point! I did have something like that in the original version, but was experimenting with other approaches and lost the digits somewhere along the way.


For those who don't speak Haskell, it's worth noting that "<>" is the monoid concatenation operator. The "~>" operator returns a function of type "a -> Maybe String". A function type is a monoid if its return type is a monoid, so this works.
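For those who'd rather see that idea outside Haskell, here's a rough Python sketch (the names `rule` and `mappend` are purely illustrative): functions returning an optional string combine into another such function, which is exactly the function-monoid structure described above.

```python
# Each rule maps an int to a str-or-None ("optional string").
def rule(n, word):
    return lambda i: word if i % n == 0 else None

# Combining two rules yields another rule: concatenate the hits,
# stay None when both miss -- the monoid operation lifted over functions.
def mappend(f, g):
    def combined(i):
        a, b = f(i), g(i)
        if a is None and b is None:
            return None
        return (a or '') + (b or '')
    return combined

fizzbuzz = mappend(rule(3, 'Fizz'), rule(5, 'Buzz'))

assert fizzbuzz(15) == 'FizzBuzz'
assert fizzbuzz(3) == 'Fizz'
assert fizzbuzz(7) is None
```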


  from functools import partial  

  def rem(i, n, message):
    return message if not (i % n) else ''

  [(''.join([
     partial(rem, message='fizz', n=3)(i),
     partial(rem, message='buzz', n=5)(i)
     ]) or i) for i in range(1,101)]

vs

  {-# LANGUAGE MonadComprehensions #-}

  module Main where
  import Data.Monoid (mappend)
  import Data.Maybe (fromMaybe, listToMaybe, maybe)
  import System.Environment (getArgs)

  fizzbuzz i = fromMaybe (show i) $ mappend ["fizz" | i `rem` 3 == 0]
                                            ["buzz" | i `rem` 5 == 0]

  -- mapM_ is our iterator, putStrLn writes to console.
  main = mapM_ putStrLn [ fizzbuzz i | i <- [1..100] ]


As the author of the article, I feel like you've missed the point of this article if you're writing a "vs" post.

The important point here is that monoids make this easier and less full of unique logic. The code you've posted here is clever and thoughtful, but still commits to the same "we don't abstract over monoids" problem I identified. You saved 4 lines over other versions and probably lost clarity in your quest for that.

It's worth noting that you are leveraging a built-in monoid: strings. '' is mempty and join is your combining operation.


Sorry, it is very early in the morning so I'm still waking up. I didn't mean anything by the groggy 'vs' other than a rosetta syntax comparison to Python.

With that being said, I don't think my snippet lost clarity and is functionally (har har) equivalent to the Haskell.

I believe where Haskell would shine is if the mappend operated on a more complex type than a regular string, but honestly I only started reading Learn You a Haskell two days ago, so maybe I'm missing something deeper?

Edit: I guess you could say Python strings are monoids and maybe works on ''.


Fair enough. Sorry, last time this was posted here we had a lot of those.


I think the responses you get here are largely due to the "language jumping", which instantly turns it into a language comparison.

If you'd stuck to one language, and implemented the "machinery" in that, it'd perhaps be clearer that the point wasn't C/Python/Ruby vs. Haskell.

E.g. here's a generic version in Ruby that, to keep it simple, just treats an Array as Maybe, with the empty Array as Nothing. That gives us mappend and fromMaybe (we could make this more convoluted with a lazy "mappend" too, but it'd make the code even more un-Rubyish) like these:

    def mappend(*args) args.map(&:call).compact; end
    def fromMaybe(default); [yield].map{|m| m.empty? ? default : m}.first; end
And so fizzbuzz becomes:

    def fizzbuzz(i)
      fromMaybe([i]) {
        mappend(
          -> { i % 3 == 0 ? "Fizz" : nil },
          -> { i % 5 == 0 ? "Buzz" : nil })
      }
    end

    (1..100).each {|i| puts(fizzbuzz(i).join) }
And just as clear and extensible as the Haskell version.

I think the issue with using this as an example is compounded by making it one where you can make the lambdas generic over a simple hash lookup as well, which makes it easy to overlook that the above can also handle more divergent rules (e.g. ones not relying on the remainder).


I took a course that focused on Haskell in college. I enjoyed it. I love reading articles about how neat Haskell is. I've still never been convinced that I should use Haskell over a more mundane language for any actual project.


Depends on what the project is. If parsing is the biggest part of the project then I would say Haskell is top-class in that respect. It has proven success stories. However, due to the lack of control over GC it has latency issues with certain kinds of work and it's not a great choice for writing demanding games.

For me, one of the big holes in the Haskell story is a great, idiomatic GUI toolkit. That makes it a little more difficult to use for standard desktop applications, but not impossible (after all, there are binding to common frameworks like GTK etc).


On the GUI toolkit issue, I'd recommend combining gtk2hs (which is a thin wrapper around GTK) with Reactive Banana. GTK signals are isomorphic with Banana events and GTK attributes are isomorphic with Banana behaviors. So you can connect up a button press signal from GTK to an Event, transform that event with data from some other behavior, and hence generate a new behavior which is connected to the GTK attribute of the text displayed in a text box. Your application logic is then expressed as the plumbing that pipes data from event sources to event and behavior sinks. This works well because the event plumbing is composable in ways that callbacks and state variables of native GTK aren't.


Thanks for the advice! I'll look into that


Have to agree with you on all counts. I forgot about parsing, I generally don't have to do that. The final project my group attempted was a simple game/simulator in Haskell and it was by all counts a pain to work around the statelessness. When we were trying to create an interface for the game we also noticed the lack of good GUI libraries.


I'd extend parsing to any part of a compiler. Or a whole compiler. It's really nice for managing that complexity.


I do pretty ordinary stuff and it makes my work a lot nicer. An open source example is the Elasticsearch client I wrote called Bloodhound: https://github.com/bitemyapp/bloodhound

It makes applications that use ES a lot easier to write and maintain. It makes the Query API a lot more discoverable too.


Just for the fun of it, I tried going the complicated way in Javascript:

```
var createRange = n => Array.from(Array(n), (_, i) => i + 1);
var rem = (i, n, message) => (i % n ? '' : message);
var accumulator = (a, b) => a + b;
var insideReducer = (p, n) => accumulator(p, rem.apply(null, n));
var outsideReducer = (p, n, i) => n ? p + n : p + (i + 1);

var messageFilter = (n, filterArray) => createRange(n)
    .map(i => filterArray.map(el => [i, ...el]).reduce(insideReducer, ''))
    .reduce(outsideReducer, '');

var fizzbuzz = n => messageFilter(n, [[3, 'fizz'], [5, 'buzz']]);
```


Just pass in an associative array with the numbers 3 and 5 as keys and what to say when n mod that key == 0. Extensible, done, clock out.


Associative arrays are monoids. Just saying.


There are some interesting Perl 6 implementations here[1] (and in the discussion of that) which show off some capabilities of multiple dispatch, and farther down, subtypes combined with multiple dispatch.

I particularly like this terse one:

  sub fizzbuzz($n, $_ = [$n % 3, $n % 5]) {
      when [0, 0] { "fizzbuzz" }
      when [0, *] { "fizz" }
      when [*, 0] { "buzz" }
      default { $n }
  }
That's fairly tight, while remaining somewhat readable. There's a version that adds a line or two by moving $_ into the body of the sub as well in case that's more readable to some.

1: https://gist.github.com/masak/b9371625ad85cfe0faba


Nice, reminds me of the equivalent OCaml implementation:

    let fizzbuzz n =
      match n mod 3, n mod 5 with
      | 0, 0 -> "fizzbuzz"
      | 0, _ -> "fizz"
      | _, 0 -> "buzz"
      | _ -> string_of_int n


The Ruby code praised in the article is crap. It abuses side effects (three unnecessary print invocations) and has to resort to a helper (@printed = "Fizz"). I would never let something like this through code review.

It's not as if you couldn't simply generate the data first and then print it...


For a high-pressure situation like an interview, I think it's quite good code to produce. After the fact we could separate concerns.


We can go a little shorter with C :)

    main(i){for(;i<101;puts(printf(i%3?"%d\r":"fizz",i)&&i++%5?"":"buzz"));}


Ignoring that this only works if "\r" works, here's a shorter Ruby version without even trying hard:

(1..100).each{|i|e=[["fizz"][i%3],["buzz"][i%5]].join;puts(e[0]?e:i)}

(Array lookup with an out-of-range offset returns nil, Array#join produces empty strings for nil elements, and accessing element 0 of a zero-length string returns nil.)


How does the divisible-by-five case suppress the numeral output, when `i%3` is nonzero?


The "\r" in the format string for the number is carriage return. It ensures that as long as you print to terminal rather than redirect to file etc., the number gets overwritten if anything else gets printed on the same line.


Ah, thank you.


You're hired.


The nightmare is real.


Thug life


Monoid is a concept that is older than category theory. And way simpler. I don't get why people always say "category theory" when they mean basic algebra. Maybe they don't know better (lack of math background, or expository texts that go straight to "category theory")?

Does anyone have an idea why that happens?


Introduction to category theory via Haskell, and not learning the right areas of abstract algebra?

It's plausible to even do a degree in mathematics and never encounter the name 'monoid' - you'll obviously encounter the structure, but my abstract algebra classes just skipped ahead to groups, and then to rings and modules, instead of spending any time on the simpler structures.

Other times, like when studying formal languages, the structure is introduced because regular languages form a monoid -- you have an empty language and a method to combine any two languages -- but it's never given the name monoid.

So you come across Haskell, and it gives a name to things like monoids, monads, functors, and so on, and says that monads and functors come from category theory, and it seems like a valid inference to say that monoids also come from category theory.


Haskellers tend to think category theory is far more useful than it is, and non-Haskellers the complete opposite. On a forum of predominantly non-Haskellers I'm happy for them to get a bit of a overdose of the terminology.


I guess adding the discovery year of monoids to a blog post is not associative.


Translation to Python:

    for i in range(100):
      print((("fizz" if i % 3 == 0 else "") +
             ("buzz" if i % 5 == 0 else ""))
             or str(i))
The "+" makes it a monoid.
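For the skeptical, the monoid laws that remark alludes to can be checked directly on strings:

```python
# Strings under concatenation, with '' as the identity, form a monoid.
a, b, c = 'fizz', 'buzz', 'bar'

assert (a + b) + c == a + (b + c)   # associativity
assert '' + a == a == a + ''        # '' is the identity element
```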


Or you can underthink it. It's side-effect free, takes no arguments, and is terminating, so it can be optimized perfectly for speed:

    #!/bin/cat
    1
    2
    fizz
    4
    buzz
    6

    98
    fizz
    buzz


That prints #!/bin/cat at the beginning, though.

If I can return to overthinking it, you could use a silly tool[1] I wrote ages ago and do something like (untested)

    #! /usr/bin/cmd tail -n +2
    1
    2
    fizz
    ...
[1] http://reasonableapproximation.net/2010/07/27/silly-things-t...


Why is FizzBuzz a separate branch? To produce the desired output, all you have to do is print Fizz and Buzz in the correct order.


tl;dr: concatenation of strings corresponds to concatenation of "optional strings"; monoid homomorphism!


No discussion of overthinking FizzBuzz is complete without a shoutout to the glorious EnterpriseFizzBuzz: https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpris...


My favorite fizzbuzz-related story will always be http://joelgrus.com/2016/05/23/fizz-buzz-in-tensorflow/

Similar to this story, a ridiculously overengineered fizzbuzz using machine learning...


My effort (I got lucky):

https://play.golang.org/p/Q7Ek05QjOP


Let the nerd snipe Olympics, BEGIN!



