- The number of trailing zeros equals the number of times you can divide the number by 10.
- There are far more factors of 2 than factors of 5, so the number of 5s is the limiting factor.
- 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80, 85, 90, 95, 100 each contribute (at least) one factor of 5.
- 25, 50, 75, 100 each contribute another factor of 5.
So the answer is 20 + 4 = 24 zeros.
I love monoids, but I personally thought that was a bad example. Not only did it complicate the problem, it didn't actually lead to any understanding that would let you write a generic and efficient solution for n!. Something like this is O(log n):
    # Count the trailing zeros of n! by counting factors of 5:
    # floor(n/5) multiples of 5, floor(n/25) contribute one more, and so on.
    count = 0
    divisor = 5
    while divisor <= n:
        count += n // divisor   # integer division
        divisor *= 5
    # e.g. n = 100 gives 20 + 4 = 24
The goal of the article is to show that there's a consistent structural framework for problem solving underneath. I mention briefly that you can improve the solution along your lines.
The idea that lets us prove your algorithm above correct using this theory is that monoid morphisms compose (monoids and their morphisms form a category), and that equivalence relations that respect monoid composition induce monoid morphisms onto the quotient.
Then we just define the equivalence relation where two numbers are equivalent if they have the same number of fives in their prime factorization. The composite monoid morphism yields your algorithm.
I didn't want to overcomplicate the article with this, but I did mention it (it's in the hard-to-parse paragraph at the end of the section)
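For concreteness, here's a small Haskell sketch of that idea (the code and names are mine, not from the article):

    -- Sketch (my names): counting factors of 5 is a monoid morphism
    -- from the positive integers under (*) to the naturals under (+):
    --   fives (a * b) == fives a + fives b
    fives :: Integer -> Integer
    fives n
      | n `mod` 5 == 0 = 1 + fives (n `div` 5)
      | otherwise      = 0

    -- Trailing zeros of n! via the morphism: sum it over the factors of n!.
    trailingZerosSlow :: Integer -> Integer
    trailingZerosSlow n = sum (map fives [1 .. n])

    -- Grouping that sum by powers of 5 recovers the O(log n) loop above.
    trailingZeros :: Integer -> Integer
    trailingZeros n = sum (takeWhile (> 0) [n `div` (5 ^ k) | k <- [1 :: Int ..]])

Both versions give 24 for n = 100, matching the hand count at the top of the thread.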
A heartfelt +1 from me! :)
 I don't really mean that in a negative way, but it's like Alan Davies and Stephen Fry on QI, if you know what I mean.
EDIT: I realize that we're not on AGT/BGT or something, but you really should add a footnote about this.
For example, there's a Haskell library (foldl) that defines a type called "Fold a b". It's basically a little state machine that you feed values of type "a" until you close it, and then it returns an output of type "b".
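Using one looks roughly like this (a sketch of my own against the library's Control.Foldl module):

    import qualified Control.Foldl as L

    -- Feed every element of the list to the Fold, then close it.
    example :: Int
    example = L.fold L.sum [1, 2, 3, 4]   -- 10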
Turns out these Folds are examples of an abstraction called a "Comonad". This basically means that there is a function
> extract :: Fold a b -> b
That runs a fold without giving it any input, and a function
> duplicate :: Fold a b -> Fold a (Fold a b)
That turns a Fold that consumes "a"s and returns a "b" into a Fold that consumes "a"s and returns another Fold that consumes "a"s and returns a "b". But what on earth could this be useful for?
It turns out that "duplicate" lets you keep a Fold "open" when you pass it to functions that run the Fold and only return its result (say, a function that feeds the Fold values read from a file). If you pass a "duplicated" Fold to such a function, you get another Fold back as the result, which you can keep feeding.
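A rough sketch of that, assuming the Comonad instance the foldl library defines for Fold:

    import Control.Comonad (duplicate, extract)
    import qualified Control.Foldl as L

    -- Sketch: running a duplicated Fold returns a new Fold rather than a
    -- final result, so the computation stays "open" for further input.
    partial :: L.Fold Int Int
    partial = L.fold (duplicate L.sum) [1, 2, 3]

    soFar :: Int
    soFar = extract partial              -- 6, closing it with no more input

    total :: Int
    total = L.fold partial [4, 5]        -- 15, after feeding it more values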
The Applicative instance is even more useful: it lets you combine two Folds into one so that their results can be calculated in a single run.
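For example, the classic one-pass average (again a small sketch with the foldl library, following its documentation):

    import qualified Control.Foldl as L

    -- The Applicative instance combines two Folds so that the sum and the
    -- length are both computed in a single pass over the input.
    average :: L.Fold Double Double
    average = (/) <$> L.sum <*> L.genericLength

    main :: IO ()
    main = print (L.fold average [1 .. 10])   -- prints 5.5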
This kind of useful discovery doesn't always happen, though. Sometimes the new functions turn out to be useless, or too inefficient.
I also want to say that I don't believe Fold was discovered from category theory; it seems to me more like a neat generalization of foldl, and once they had the idea, they asked questions like "is this a Comonad?". Indeed, it's far easier for me to figure out what's going on by reading http://hackage.haskell.org/package/foldl-1.2.1/docs/src/Cont... to see the Applicative instance itself.
> and that nothing is added by category theory itself
In some sense you're right. CT is really just a systematic way of describing patterns. Theoretically we could just prove loads of special cases and never have to resort to CT to prove anything... but it sure helps when you can just invoke the Yoneda Lemma for some special case that you're working with rather than going through the whole process of re-proving the Yoneda Lemma for the umpteenth time.
(Btw, this isn't specific to CT. It's just that math-level specification/proof basically always tends towards generalization... for really good reasons.)
EDIT: I should say that I'm not yet close to the level where I can say that a "Monad is just a monoid in the category of endofunctors". I'm still just a muggle in this whole CT thing and yet it's even helping me!
For this to actually be a helpful tool, you of course need to have internalized the concept.
If you don't have a working knowledge of [insert here], you are also losing something that might give your code a better structure.
This looks like an intentional exaggeration, and I think it serves no purpose. I am a math professor now, and my speciality is very algebraic, but I definitely didn't know what a monoid, or even a homomorphism, was in my first year of undergrad. (I may not even have known what a monoid was in my first year of grad school....)
In which case you need a name for that interface. When you have an interface that corresponds to something that's already known under a particular name, normally you'd use that name for the interface.
Absolutely, that's the Curry-Howard Isomorphism.
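(For anyone unfamiliar: Curry-Howard reads types as propositions and programs as proofs of them. A tiny illustration, my own example rather than anything from the thread:)

    -- Sketch (mine): the type a -> (b -> a), read as the proposition
    -- A implies (B implies A); this function is a proof of it.
    constProof :: a -> b -> a
    constProof x _ = x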
I'm new here :)
Each time I see something like that, I think it looks great, but I'm having a hard time understanding the meaning of the vocabulary used…
Are there any good (free?) resources out there to learn Monoids, Monads, traits, category theory, in short, anything used in functional languages?
This is something that clearly stops me from using functional languages more.
This is a good paper about monads in functional programming. It reads quite similarly to a monad tutorial, but with a little more formality.
As for actual category theory, I've enjoyed the first few chapters of "Algebra: Chapter 0", by Aluffi.
If you're after more type theory, the usual recommendations are "Types and Programming Languages", by Pierce, and "Software Foundations", also by Pierce.
If you feel it's still too opaque, feel free to let me know anything specific and I will try to amend the articles with further explanation.
Categories/Functors/Monads will appear in the series in the future, as these arise as generalizations of topics discussed so far.
Has articles about Category Theory, Monads, etc... for beginners.