Snap: A Haskell Web Framework (snapframework.com)
171 points by tomh- 2761 days ago | 64 comments



I don't write Haskell, but the documentation on that page almost made me wish I did.

Other projects, take notes.


This kind of documentation is rare for Haskell projects. Many other attempts have quietly failed to gain traction because documentation writing is usually the last item on the todo list for Haskell projects. Really glad this one put such effort into it!


> documentation writing is usually the last item on the todo list for haskell projects

That's unfair. Documentation writing is usually the last item for software developers. Full stop.


Yes, you're right; it was unfair to single out Haskell projects. However, I feel it is even more important for Haskell projects to actually finish that last item. I've seen several web framework projects like HAppS, Happstack, and Turbinado which are probably great pieces of software but hard to get started with. For me personally, it is a lot harder to start doing Haskell web development coming from a background of MVC-based web development. Even though I know Haskell better than Python or Ruby, it is easier for me to pick up Python or Ruby frameworks because of the way I think when writing web applications (which is mainly imperative). This is where great documentation comes into play. The Snap framework seems to make this transition easier, so I will definitely take a peek at it!


I was trying to get going with Yesod earlier this evening, and all props to Michael Snoyman, but seeing the Snap Web site makes me far more motivated to try it, and gives me more confidence in what it does.

But, as dons pointed out, crappy docs are not limited to Haskell by any means.

If you want to see your project gain traction, it has to be stupid easy for people to jump in and start playing, and a solid Web site with serious docs is a big win there (as are project generators and copious examples).


I thought much the same thing. Writing good documentation is hard, and the effort that must have gone into this site is impressive. My own project documentation is admittedly nowhere near this.


This looks great. If you are a non-Haskeller or only a semi-Haskeller and are wondering what technical features set this apart from other existing frameworks:

The Snap framework uses a style of I/O called “iteratee I/O”. We have opted for iteratees over handle-based and lazy I/O because iteratee I/O offers better resource management, error reporting, and composability.

If you're interested, you can read some in-depth stuff here: http://okmij.org/ftp/Streams.html (these papers are somewhat famous)

The Snap tutorial has a decent summary of how they work, but it doesn't provide a lot of context for why.

If you are already familiar with functional programming, then you know about foldl, foldr, and left/right recursion. When processing a collection of some kind (like a list), you usually take an enumerator (the fold function or a map function, for example) and pass it an iteratee (the function that will be applied to each element); then your work gets done and you get a result. A left fold must consume the entire list before producing its result; a right fold and map can produce their output incrementally (lazily).
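To make that terminology concrete, here is a plain-Haskell sketch (no library involved, names are illustrative): foldr plays the enumerator, and the function passed to it is the iteratee.

```haskell
-- foldr is the enumerator; `step` is the iteratee applied to each element.
step :: Int -> [Int] -> [Int]
step x acc = (x + 1) : acc

bumped :: [Int]
bumped = foldr step [] [1, 2, 3]

main :: IO ()
main = print bumped  -- prints [2,3,4]
```

Because foldr is lazy, the same pipeline also works on infinite input: `take 2 (foldr step [] [1..])` yields `[2,3]`.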

Because Haskell is a lazy language, there is another way to process lists lazily: with explicit recursion to the right. If you are familiar with Scheme or Lisp, it would look kind of like

    (define (myfunc xs)
      (cond ((null? xs) '())
            (else (cons (+ (car xs) 1)
                        (myfunc (cdr xs))))))
which would add 1 to each element of a list, returning a new list.

In Lisp or a non-lazy Scheme, this would blow up in your face for a large list. In Haskell it's no sweat, because the list will only be processed as it is needed, and generally for control structures you have to go out of your way to make it 'space leak' (build up a bunch of unevaluated expressions and then explode when you finally evaluate it.)
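The direct Haskell translation of that Scheme function, where laziness means even an infinite input is fine:

```haskell
-- Add 1 to each element, using explicit right recursion
-- just like the Scheme version above.
myfunc :: [Int] -> [Int]
myfunc []       = []
myfunc (x : xs) = (x + 1) : myfunc xs

main :: IO ()
main = print (take 3 (myfunc [1 ..]))
-- prints [2,3,4]: only the demanded prefix of the
-- infinite list is ever evaluated
```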

If you use foldr or map in Haskell, it works like this. One interesting thing about using non-explicit recursion in Haskell is that if you put a bunch of maps or folds (or with the Stream Fusion library, many different types of list operations) next to each other, the compiler will inline them together and eliminate intermediate list allocations. So you can end up writing your process as a series of discrete logical steps (from list type a, to b, to c, to d) but still get the performance of doing only one traversal. Very nice.
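As a small illustration of the fusion point (a hypothetical pipeline, relying on standard GHC rewrite rules): composing two maps is semantically the same as one map of the composed function, and fusion turns the former into the latter so only one traversal happens and no intermediate list is allocated.

```haskell
pipeline :: [Int] -> [Int]
pipeline = map (+ 1) . map (* 2)

-- Semantically (and, after fusion, operationally) equivalent to:
pipelineFused :: [Int] -> [Int]
pipelineFused = map ((+ 1) . (* 2))

main :: IO ()
main = print (pipeline [1, 2, 3])  -- prints [3,5,7]
```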

One of the downsides is that you end up with a lazy control structure, which can be problematic when dealing with the real world. For example, when reading from a file handle, someone could move the file or delete it or something. With lazy IO, the programmer is not really controlling when the file will be read by the program. Often, this is not a problem. However, with networking and time-critical applications, it can be a big problem.
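A classic lazy-IO pitfall along these lines (a self-contained sketch; the file name is made up): hGetContents returns the file's contents lazily, so if the handle is closed before the string is forced, the unread contents are simply lost.

```haskell
import System.IO

main :: IO ()
main = do
  writeFile "lazy-demo.txt" "hello"
  h <- openFile "lazy-demo.txt" ReadMode
  s <- hGetContents h   -- nothing is actually read yet
  hClose h              -- handle closed before s is demanded
  print (length s)      -- prints 0: too late, the data is gone
```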

Another problem is that if you need to perform allocations (like going from a list of one length to a list of another length) in the middle of your transformations, it gets really tricky if you want to maintain fusion. Normally with explicit right recursion you can just cons together a larger list on the fly, but we can't do fusion on explicit recursion, and compilers are generally bad about figuring out what kinds of allocation our explicitly recursing function does.

We don't want to give up our elegant way of expressing operations on sequences, but we want to know when we are writing to and reading from the real world, and we want to do it in deterministic constant space. Iteratees let us do that almost as elegantly as the normal list stuff, by driving the consumption of the list from the functions that do the actual processing. Sort of like a list unfold, but not quite.
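A toy version of the idea (illustrative only; the real iteratee libraries have a richer API): the consumer is a value that is fed input one element at a time and says whether it wants more, so the driver, not the consumer, owns the data and can release resources deterministically.

```haskell
-- A toy iteratee: either done with a result, or asking for more input.
data Iter a b = Done b | More (a -> Iter a b)

-- The enumerator feeds input to the iteratee and stops as soon as it
-- is done, so the driver (not the consumer) controls the traversal.
run :: [a] -> Iter a b -> Maybe b
run _        (Done b) = Just b
run []       (More _) = Nothing   -- input exhausted before completion
run (x : xs) (More f) = run xs (f x)

-- An iteratee that sums the first n inputs in constant space.
sumN :: Int -> Int -> Iter Int Int
sumN 0 acc = Done acc
sumN n acc = More (\x -> sumN (n - 1) (acc + x))

main :: IO ()
main = print (run [1 ..] (sumN 3 0))  -- prints Just 6
```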

Check out attoparsec-iteratee http://hackage.haskell.org/package/attoparsec-iteratee for an example of iteratee usage: it turns a fast bytestring parser into one that's based on iteratees, which will probably end up being even faster. (correction: which will allow you to process streams in constant space without needing to explicitly manage things.)

(If Dons or Gregory Collins is around to correct the numerous mistakes I'm sure I made in this post, please do so!)


> Check out attoparsec-iteratee http://hackage.haskell.org/package/attoparsec-iteratee for an example of iteratee usage: it turns a fast bytestring parser into one that's based on iteratees, which will probably end up being even faster.

Iteratees don't really make things "faster" (you end up with a sequence of imperative read()/write() calls just like everything else), but they allow you to stream in O(1) space while still being easily composable.

For example, we do gzip, buffering, & chunked transfer encoding using simple function calls, and we don't really have to worry about explicit buffer management or control flow while we're doing it.


Fixed, thanks!



The example for the echo handler is wrong; to fix it, replace

    req <- getRequest
    writeBS $ maybe "" id (getParam "s" req)
with

    param <- getParam "s"
    writeBS $ maybe "" id param
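For context, the full corrected handler would look roughly like this (a sketch: the module name and `getParam :: ByteString -> Snap (Maybe ByteString)` signature are assumed from the Snap of that era, and OverloadedStrings is assumed as in the tutorial):

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Data.Maybe (fromMaybe)
import Snap.Types

echoHandler :: Snap ()
echoHandler = do
  param <- getParam "s"           -- Maybe ByteString
  writeBS (fromMaybe "" param)    -- same as: maybe "" id param
```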


Haven't looked at Haskell much yet, but I had a look at the quick start and it's really well written. The website is one of the cleanest I've seen for a framework in a while; well done on the design!


I'd love to see mustache http://mustache.github.com/ ported over to Haskell now. (Maybe this would make a good pet project.)


I started working on something somewhat similar to mustache for Haskell a while back, though it's still in the early alpha stage of development. http://github.com/jamessanders/rstemplate


What are the benefits of mustache?


Amazing documentation right from the start. Kudos to all the members of the project!


I had a look at the benchmarks. Could someone confirm: larger values (bars) mean better? So RoR is the worst, and Snap the best, on both benchmarks.


Yes, it's the number of requests handled. More requests served is better.


What was the deployment stack like for Rails? The numbers are way off.


I downloaded Rails, followed a few online tutorials to build the app, and ran script/server to run the server. As mentioned in the benchmarking document, we're not experts on Rails deployment and will gladly accept suggestions for improvements that can be implemented in a few minutes.


You are running the benchmark with WEBrick, a slow, development-only web server. For Rails you should run nginx + Passenger.


> For Rails you should run nginx + Passenger

No, that's for a production website and requires setup. They just need to replace webrick with thin.

> gem install thin

then

> thin start

instead of

> ./script/server

Simple.


I ran the benchmark with the thin server using the following httperf command:

httperf --hog --num-conns 1000 --num-calls 1000 --burst-length 20 --port 3000 --rate 1000 --uri=/pong

The best I got was 258 requests / sec and 13.5 average responses / sec. The chart on the website plots responses / sec. So thin seems to perform worse. I'm guessing that we're running into the limit of 1024 file descriptors. I didn't test with the libev version of Snap, so my testing had that limit too, but my guess is that Snap is faster and manages to stave off the limit long enough to serve a reasonable number of requests. But I don't know for sure.


Um...shouldn't it also be:

script/server -e production

Running in development mode severely hampers Rails' performance.


Good point. I also noticed they're running snap with 4 threads. Since rails is multi-process rather than multi-thread they would actually need to set up something like passenger as acangiano suggested.

Better would be to just limit the test to single-process/single-thread. The idea should be to compare web frameworks not web servers.


Yes, but it's also useful to compare similar levels of effort. Using 4 threads with Snap is a similar level of effort to using thin. Setting up 4 separate processes with passenger requires quite a bit more effort, especially if you haven't done it before.


You can set up 4 processes with thin, just do:

> thin -e production -s 4 start

The problem is that each process listens on its own port. They can't all listen on port 3000, so you need some sort of proxy to parcel out the requests.


I did use -e production, just didn't remember to mention it in the post above. I'll try thin though.


Agreed...I'm surprised Rails is SOOOO far behind everyone else.


"An XML-based templating system for generating HTML" That turned me off immediately.


"To install Snap, simply use cabal. It’s up to you whether or not you want to use the Heist templating library. It is not required if you just want to use the rest of Snap."

But I haven't seen what other options are in place for simple templating. Presumably you could use Hamlet or something, if you wire it in.

It should include at least some basic string interpolation so you can do simple output in a simple way.


Sorry you feel that way; at least it's optional. The idea is that we are interested in doing DOM-level transformations bound to HTML tags, and there just isn't a robust HTML5 DOM parser available for Haskell.


What would you suggest for "populate placeholders in a string and send it out" page rendering?



Thanks!


http://www.alsonkemp.com/programming/a-haml-parser-for-haske... is young (but not) but looks (pretty) hype.


What don't you like about XML-based templating?


I am not sure if it was supposed to be like this, but all I get when I visit the URL is a list of random Scribd links. And the links have nothing to do with Haskell.


That makes no sense.


This is what I get (I just did a screengrab): http://twitpic.com/1px4b8

I am confused as well.


What do you get when you type "dig www.snapframework.com" into Terminal? I get

    ;; QUESTION SECTION:
    ;www.snapframework.com.         IN      A

    ;; ANSWER SECTION:
    www.snapframework.com.  3600    IN      CNAME   herod.hosts.coptix.com.
    herod.hosts.coptix.com. 86400   IN      A       64.203.102.17

    ;; AUTHORITY SECTION:
    coptix.com.             108393  IN      NS      a.ns.coptix.com.
    coptix.com.             108393  IN      NS      b.ns.coptix.com.

    ;; ADDITIONAL SECTION:
    a.ns.coptix.com.        194793  IN      A       217.160.248.220
    b.ns.coptix.com.        205469  IN      A       64.203.102.2


I get:

    ;; QUESTION SECTION:
    ;www.snapframework.com.         IN      A

    ;; ANSWER SECTION:
    www.snapframework.com.  3600    IN      CNAME   herod.hosts.coptix.com.
    herod.hosts.coptix.com. 86400   IN      A       64.203.102.17

But it works fine now; after a couple of hours I was able to access snapframework.com.

Still have no idea what it was; it has never happened before :/


You're searching with Google. A DNS issue, and the browser got confused?


I've noticed a type of virus recently that sends you to different links instead of the ones you were searching for on Google. Maybe you have some kind of extension like that installed?


No, I did not search with google. I clicked directly on the link provided here.


The result is a Google search page.


That's not a regular Google search page - possibly a custom search engine, but regular Google looks very different.


[deleted]


[deleted]


[deleted]


"it's just that putting type signatures in comments and checking them when I write new functions has done me fine in Ruby"

If you are writing type signatures anyway, why not also have a language in which the compiler enforces them?

I understand using a dynamically typed language and not having to write type signatures. I understand using a statically typed language and having to write type signatures (or having the compiler infer them) and having the compiler ensure type safety.

Your approach (writing type signatures in comments and manually type checking them) seems to combine the worst of both worlds? I don't imagine many people can do type checking/inference in their heads for large code bases.

Not snark. Genuinely curious as to why someone would do something like this.


When I've tried to explain Haskell to Ruby folks, I mention that at some point you are likely to be writing unit tests that amount to type checks. Maybe not always, or maybe not explicitly, but the value/effort ratio of writing a type signature in Haskell is at least as good as the corresponding test code you would write in Ruby. And you don't even have to write the type signature all the time, but you still get the type checking benefits.


As a Ruby folk, I somewhat agree. Unit testing is essential in a dynamic language otherwise things are going to blow up.

However, this doesn't really blow up the number of test cases. If you compare unit testing dynamically typed languages with unit testing statically typed languages, there is only a negligible number of test cases that would exist in Ruby purely for the purposes of type checking (over said statically typed language; there are type-checking cases for when you're switching on type, obviously).

In fact, I find that writing tests in a dynamic language becomes easier over time because of the abstractions that are easier to build in dynamically typed languages. Some of those abstractions may be possible in statically typed languages, but it's significantly more difficult.

Disclaimer: I've never written tests (only 'toy' code) in Haskell.


"However, this doesn't really blow up the number of test cases."

Sure. But what I tell non-static-typing people is that the effort to write the type sig is arguably less than any unit test you might write, and the typing eliminates at least some likely tests you might otherwise have to write.

Now, Haskell's type system introduces other concerns, and I haven't written enough to say how, overall, that compares to developing in Ruby.

"I find that writing tests in a dynamic language becomes easier over time because of the abstractions that are easier to build in dynamically typed languages. Some of those abstractions may be possible in statically typed languages, but it's significantly more difficult."

I ported some Haskell code to Ruby and found that there were a number of powerful high-level abstractions in Haskell that were damn hard for me to replicate in Ruby.

It may be an apples-to-oranges comparison, plus I need more Haskell experience to get a better feel for things, but so far I don't see Haskell as being in any way less capable in generating abstractions.

The issue may be that the two languages just offer different realms of abstractions, and trying to do Ruby-style abstractions in Haskell is as prone to pain as Haskell-style abstractions in Ruby.


Now, if only I can wrap my head around freaking monads.


Monads are pretty simple:

Think of a monad as a spacesuit full of nuclear waste in the ocean next to a container of apples. Now, you can't put oranges in the spacesuit or the nuclear waste falls in the ocean, but the apples are carried around anyway, and you just take what you need. (http://koweycode.blogspot.com/2007/01/think-of-monad.html)

That explanation is pretty good if you understand monads. Otherwise it is completely useless.


I must not understand monads. I thought I knew the basics of how to use them, and have used them in the little bit of Haskell I have written, but that makes no sense at all to me.

What are the apples and oranges supposed to represent? Why would anyone want to put oranges in the spacesuit? When "you just take what you need", who is the "you", and are they swimming around in the ocean? Is the ocean significant at all?

I might be wasting my time trying to understand this analogy. I'm not up to speed on terms Haskellers throw around, such as fields, rings, category theory, etc. Haskellers are welcoming of newbies and eager to teach, but they operate on a vastly different level than the average developer, causing some of us to feel we're not smart enough to wield Haskell. Unfortunately, even some basic concepts are so abstract that trying to explain them in concrete terms often leads to gibberish such as dons' analogy there.

I realize dons probably wrote that for the monad-savvy crowd as kind of an inside joke, perhaps I'd do well to just move on.


So you have to understand monads in order to understand the explanation that helps you understand monads? Heh...


The best way to understand monads is to stop reading monad tutorials and write some code. There is only one particular trick to learning monads: use the >>= operator rather than do notation until you understand the basics. Then read this explanation of do notation:

http://en.wikibooks.org/wiki/Haskell/do_Notation

Actually, this is just a special case of a general rule: "The best way to understand {{programming_concept}} is to stop reading {{programming_concept}} tutorials and write some code."
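For instance (a minimal sketch in the Maybe monad): a do block is just sugar for a chain of >>= applications, and writing both forms side by side makes the correspondence concrete.

```haskell
-- do notation:
pairSugar :: Maybe (Int, Int)
pairSugar = do
  x <- Just 1
  y <- Just 2
  return (x, y)

-- The same computation desugared into explicit >>= chains:
pairDesugared :: Maybe (Int, Int)
pairDesugared =
  Just 1 >>= \x ->
  Just 2 >>= \y ->
  return (x, y)

main :: IO ()
main = print (pairSugar == pairDesugared)  -- prints True
```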


I think your general rule only works for those who learn this way. I find that a lot of programmers learn this way, but not all... so this general rule isn't universal ;)


My current understanding of monads is that they are just a formalisation and a generalisation of an execution (or composition) mechanism that is implicit in most languages.

For example, in a language like C, the active part of the program is a composition of statements that are "executed" one by one and in order. The IO monad in effect implements this strategy.

The big thing is that the monad abstraction allows for other composition strategies too. The Maybe monad for example could be compared to a language that keeps executing things one by one until one of the computations fails, which results in the entire computation yielding a null result.

The type system helps keep these different strategies separate from each other, so that, e.g., one can't execute in parallel operations that require sequential composition.
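A tiny example of the Maybe strategy described above (illustrative names): the chain short-circuits at the first Nothing, so the later steps never run.

```haskell
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

calc :: Maybe Int
calc = do
  a <- safeDiv 10 2   -- Just 5
  b <- safeDiv a 0    -- Nothing: the computation stops here
  safeDiv b 1         -- never evaluated

main :: IO ()
main = print calc  -- prints Nothing
```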


Here's my explanation from a while back, if it's any help:

http://news.ycombinator.com/item?id=1202002


This is one of my favourite explanations: http://blog.sigfpe.com/2007/04/trivial-monad.html


jQuery is a monad. It's a giant scope that contains information and subroutines.


Everything is a monad.


wait what? I'm confused :(


He was just fooling, I think. There are many examples of algebraic structures that are definitely not monads.



