
Using Haskell to Find Unused Spring MVC Code - petercrona
https://tech.small-improvements.com/2016/11/01/using-haskell-to-find-unused-spring-mvc-code/
======
evincarofautumn
> Putting too many functions that are relatively complex in the where clause
> is a bad idea, because you lose the explicit type signature…

Note that it’s possible to give type signatures for definitions in “where”
clauses:

    twice :: Int -> Int
    twice x = two * x
      where
        two :: Int
        two = 2

However, you may need the ScopedTypeVariables extension[1] in order to be able
to express the correct type signature. (This extension will probably be
standardised at some point.)
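
A contrived sketch of where the extension matters (the names here are made up for illustration): without ScopedTypeVariables plus an explicit `forall`, the `a` in the where-clause signature would be read as a fresh type variable rather than the one from the outer signature, and the program would not typecheck.

```haskell
{-# LANGUAGE ScopedTypeVariables #-}

-- The explicit 'forall' brings 'a' into scope for the where clause.
pairUp :: forall a. a -> [a] -> [(a, a)]
pairUp x ys = map withX ys
  where
    withX :: a -> (a, a)  -- the same 'a' as in the outer signature
    withX y = (x, y)
```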

The biggest real problem I’ve found with large “where” clauses is that they
tend to have a lot of implicit dependencies on variables from the enclosing
scope. Sometimes you want that for convenience, readability, or performance,
but lifting local definitions out into top-level definitions can also help
make them more explicit and reusable.
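
To illustrate the lifting (with hypothetical names): a where-bound helper that silently closes over an enclosing variable can become a top-level definition that takes the dependency as an explicit argument.

```haskell
-- Before: 'discountedLocal' silently depends on 'rate' from the
-- enclosing scope.
totalBefore :: Double -> [Double] -> Double
totalBefore rate prices = sum (map discountedLocal prices)
  where
    discountedLocal p = p * (1 - rate)

-- After: the dependency is an explicit parameter, so the helper is
-- independently testable and reusable.
discounted :: Double -> Double -> Double
discounted rate p = p * (1 - rate)

totalAfter :: Double -> [Double] -> Double
totalAfter rate prices = sum (map (discounted rate) prices)
```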

[1]: [https://ocharles.org.uk/blog/guest-posts/2014-12-20-scoped-type-variables.html](https://ocharles.org.uk/blog/guest-posts/2014-12-20-scoped-type-variables.html)

~~~
clusmore
> The biggest real problem I’ve found with large “where” clauses is that they
> tend to have a lot of implicit dependencies on variables from the enclosing
> scope.

For me it's always a toss-up. I like to make things local where possible to
signal that the definitions need not be gone over with a fine-toothed comb --
they aren't used extensively. On the other hand, it does often lead to
implicit dependencies. It would be nice to have scope highlighting: color
every identifier based on the distance from the scope where it's defined.
This idea is shamelessly stolen from Doug Crockford in
[https://www.youtube.com/watch?v=b0EF0VTs9Dc](https://www.youtube.com/watch?v=b0EF0VTs9Dc)

~~~
chriswarbo
DrRacket has a nice feature where it draws arrows between a variable and its
binding site, and between a binding and its use sites; it would be nice to
have something similar for Haskell.

[https://docs.racket-lang.org/drracket/buttons.html#%28idx._%28gentag._9._%28lib._scribblings%2Fdrracket%2Fdrracket..scrbl%29%29%29](https://docs.racket-lang.org/drracket/buttons.html#%28idx._%28gentag._9._%28lib._scribblings%2Fdrracket%2Fdrracket..scrbl%29%29%29)

~~~
andromeduck
I think something like eclipse's definition window would do a better job.

------
MaxGabriel
> Putting too many functions that are relatively complex in the where clause
> is a bad idea, because you lose the explicit type signature (you should
> always specify it for top-level functions).

You can give functions in the where clause type signatures. It isn't as
common, but it might be preferable to breaking out code into separate
functions, depending on how well the function makes sense outside the context
of its "parent" function.

------
durak
Great post. I remember that when I first picked up a functional language
(OCaml, in my case) I was perplexed by the sheer number of concepts that were
all new to me: currying, tail recursion, higher-order functions, combinators,
functors, lenses; the list goes on and on. These concepts can be translated
outside of the functional realm to a certain extent, but the way many of
these functional languages embrace them as part of the paradigm was
refreshing, to say the least.

Since you were collecting comments/suggestions: I urge you to take a look at
the zipper data structure if you haven't already; zippers are applicable to
what you're working on and are a good intellectual exercise in their own
right :) In addition, I haven't had time to go over all of your code, but I
don't see any mention of circular dependencies for your tool. Is that a
feature yet to be implemented? This project could probably benefit from
drawing on garbage-collection techniques, since conceptually I think the two
problems are very similar.
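
For readers unfamiliar with zippers, a minimal list zipper is small enough to sketch here (a hypothetical toy, not code from the article): the focused element is stored alongside its reversed left context and its right context, so moving the focus and editing at the focus are O(1).

```haskell
-- A minimal list zipper: a focused element plus its (reversed) left
-- context and its right context.
data Zipper a = Zipper [a] a [a] deriving (Eq, Show)

fromList :: [a] -> Maybe (Zipper a)
fromList []     = Nothing
fromList (x:xs) = Just (Zipper [] x xs)

-- Move the focus one step left or right, if possible.
left, right :: Zipper a -> Maybe (Zipper a)
left  (Zipper (l:ls) x rs) = Just (Zipper ls l (x:rs))
left  _                    = Nothing
right (Zipper ls x (r:rs)) = Just (Zipper (x:ls) r rs)
right _                    = Nothing

-- Edit the focused element without rebuilding the whole list.
modify :: (a -> a) -> Zipper a -> Zipper a
modify f (Zipper ls x rs) = Zipper ls (f x) rs
```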

------
danidiaz
> I had a quick look at Hackage and noticed that someone already has written a
> parser for Java in Haskell

The author of "corrode"[1] wrote it in Haskell because a C parser was already
available there. As always, library availability matters.

[1]: [http://jamey.thesharps.us/2016/10/corrode-update-support-from-mozilla-and.html](http://jamey.thesharps.us/2016/10/corrode-update-support-from-mozilla-and.html)

------
paulsutter
Cool project. I'm a longtime Haskell skeptic but this article does a great job
explaining Haskell's advantages in this useful and otherwise challenging
example.

------
lmm
I've done similar things using Scala, which has good parser combinator support
and interoperates very nicely with a Java stack.

------
m_mueller
> For example in my case I ran into problem when reading all files lazily.
> This caused my program to have too many open file handles. It was easily
> solved though, by hacking a bit to force the complete file to be read
> directly

This kind of thing is exactly what I'm afraid of, and it makes me not want to
commit to a purely functional style. IMO FP is simply the wrong tool for
handling expensive resources, e.g. I/O or large memory regions. This is why I
think an imperative shell handling these resources, around a functional core
(potentially around another tiny imperative innermost core for handling
caches), is overall a cleaner approach for performant code.
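
For reference, the "force the file" hack the quoted article alludes to can be sketched like this (an assumption on my part: the article used lazy `readFile`; strict `Data.Text.IO` is one common alternative):

```haskell
import qualified Data.Text as T
import qualified Data.Text.IO as TIO

-- Lazy 'readFile' keeps the handle open until the contents are fully
-- demanded, so mapping it over many paths can exhaust file descriptors.
-- Forcing the whole string lets the handle be closed promptly.
readStrictly :: FilePath -> IO String
readStrictly path = do
  contents <- readFile path
  length contents `seq` return contents

-- Usually better: strict Text IO reads eagerly and closes immediately.
readAll :: [FilePath] -> IO [T.Text]
readAll = mapM TIO.readFile
```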

~~~
wereHamster
Laziness (lazy IO, in this case) has nothing to do with a language being
functional or not. It has everything to do with a language being strict or
lazy. There are strict functional languages. Most are, in fact.

~~~
m_mueller
Lazy/strict is just one property that can be problematic in a high performance
context. The biggest one to me is mutability. Basically it's a simple test:
Can I swap pointers? (e.g. 'model_current' and 'model_next_timestep'). If not,
I can't use it: allocating and freeing the required memory at each step would
slow numerical solvers down tremendously. However, if I can just have the
time iteration in an imperative shell that allows pointer swapping, while
keeping the interesting mathematics in a purely functional core, that would be
an interesting architecture (because it could make use of inherent parallelism
in a better way). So far I haven't seen anything like that becoming truly
competitive with Fortran/C/C++ in the HPC space, which I find a shame.
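
That shell-around-a-core shape can actually be sketched in Haskell itself (a toy: `update` is a made-up smoothing kernel, and the two buffers are `IOUArray`s whose references get swapped instead of reallocated):

```haskell
import Control.Monad (forM_)
import Data.Array.IO (IOUArray, getElems, newListArray, readArray, writeArray)
import Data.IORef

-- Pure core: a toy three-point smoothing kernel.
update :: Double -> Double -> Double -> Double
update l c r = (l + c + r) / 3

-- One timestep: read from 'cur', write into 'next' (interior points only).
stepAll :: Int -> IOUArray Int Double -> IOUArray Int Double -> IO ()
stepAll n cur next =
  forM_ [1 .. n - 2] $ \i -> do
    l <- readArray cur (i - 1)
    c <- readArray cur i
    r <- readArray cur (i + 1)
    writeArray next i (update l c r)

-- Imperative shell: two buffers are reused across all timesteps by
-- swapping the IORefs that point at them; no per-step allocation.
run :: Int -> Int -> [Double] -> IO [Double]
run steps n xs = do
  a <- newListArray (0, n - 1) xs
  b <- newListArray (0, n - 1) xs
  curRef  <- newIORef a
  nextRef <- newIORef b
  forM_ [1 .. steps] $ \_ -> do
    cur  <- readIORef curRef
    next <- readIORef nextRef
    stepAll n cur next
    writeIORef curRef next  -- the pointer swap
    writeIORef nextRef cur
  readIORef curRef >>= getElems
```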

Programming for HPC should be like programming in the future, as Alan Kay
likes to say, but it seems to me there is a stark (and IMO unnecessary)
disconnect between the worlds of HPC and desktop/web programming today.

~~~
fulafel
There are various ways to get the same effect (with similar space and
performance characteristics), but the approaches differ between FP languages:
persistent data structures, non-pure primitives (e.g. Clojure's atoms),
monads, etc. For Haskell, these look relevant if you're set on using strict
dense matrices:
[https://wiki.haskell.org/Monad/ST](https://wiki.haskell.org/Monad/ST)
[https://hackage.haskell.org/package/bed-and-breakfast](https://hackage.haskell.org/package/bed-and-breakfast)
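
To make the `Monad/ST` suggestion concrete, here is a sketch (a toy prefix-sum, not taken from the linked page) of a pure function that mutates an unboxed array in place internally; the mutation cannot escape `runSTUArray`:

```haskell
import Control.Monad (forM_)
import Data.Array.ST (newListArray, readArray, runSTUArray, writeArray)
import Data.Array.Unboxed (UArray, elems)

-- In-place running sums over a dense array, behind a pure interface.
prefixSums :: [Double] -> UArray Int Double
prefixSums xs = runSTUArray $ do
  let n = length xs
  arr <- newListArray (0, n - 1) xs
  forM_ [1 .. n - 1] $ \i -> do
    prev <- readArray arr (i - 1)
    cur  <- readArray arr i
    writeArray arr i (prev + cur)
  return arr
```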

I suspect HPC will always be more gnarly than elegant, since by definition HPC
is about spending money on making code go fast in very specialized apps.

~~~
m_mueller
could you expand on persistent data structures? How do they give me 'free'
memory with neither allocation nor mutable data? Or is that outside of pure
FP?

~~~
fulafel
So persistent data structures are conceptually just an extension of classic
Lisp cons cells: you can have a pointer to (a, b, c) and a pointer to (a, b,
c, d) simultaneously, without using twice the memory or time. This idea
extends to trees, and you can, for example, build an efficient tree-backed
vector with chunks of values at the nodes. With some thought you can even get
good atomicity properties, so you can safely operate on the same persistent
vector in parallel.
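
In Haskell terms, the sharing described above looks like this (a trivial sketch; the `Data.Map` part illustrates the tree-backed extension):

```haskell
import qualified Data.Map as M

-- Both xs and ys contain the spine of 'tailPart', which exists in
-- memory only once; prepending copies nothing.
tailPart :: [Int]
tailPart = [2, 3, 4]

xs, ys :: [Int]
xs = 1 : tailPart
ys = 0 : tailPart

-- Tree-backed persistent maps extend the idea: m2 shares most of m1's
-- internal tree, differing only along the path to the new key.
m1, m2 :: M.Map String Int
m1 = M.fromList [("a", 1), ("b", 2)]
m2 = M.insert "c" 3 m1
```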

Here's a good introduction to Clojure's persistent vector:
[http://hypirion.com/musings/understanding-persistent-vector-pt-1](http://hypirion.com/musings/understanding-persistent-vector-pt-1)

