
I definitely find there's a strong relationship between language complexity and bikeshedding. When you have a big language like Haskell or Scala, it's easy to get distracted from solving the actual problem by trying to do it in the most "proper" way possible. This is also how you end up with design astronautics in enterprise Java, where people obsess over using every design pattern in the book instead of writing direct, concise code that's going to be maintainable.

Nowadays I have a strong preference for simple and focused languages that use a small number of patterns that can be applied to a wide range of problems. That goes a long way in avoiding the analysis paralysis problem.




Reminds me of one of Rich Hickey's talks where he states that people just love to solve puzzles, and that complex languages with, for example, demanding type systems trick people into thinking they are adding safety or value when in fact they're just outsmarting themselves.

Often when I write something in Haskell I have this feeling. It feels satisfying to build up nice types and constructs, but I don't know if it pays off at all in any objective or empirical sense. I can't really tell if I've invented the problem that I just solved.


> I can't really tell if I've invented the problem that I just solved.

Priceless observation, nicely done!

I was quite inspired by Rich Hickey's Simple Made Easy talk when I listened to it last year. I think that's the one you're referring to. Excellent food for thought in that talk.


I think the very fact that this happens in Java - a deliberately simplistic language - is proof that it's not a problem with the language itself. If the language doesn't support particular constructs, all that means is that people will bikeshed over which pattern to use instead of which language construct.


I don't disagree in general but is Haskell a big language?


Haskell 2010 is pretty small. Comparable to say Clojure, but more complex than Scheme. GHC Haskell with the kitchen sink of extensions turned on is big. Very big.

Sticking to Haskell 2010 with a few extensions that make known behavior more consistent (GADTs, NoMonomorphismRestriction, and a few others in that vein) is the best bang for buck in my experience.
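To make that concrete, here's a minimal sketch of what that diet looks like in practice: just the pragmas you opt into at the top of the file, and a small GADT that the GADTs extension enables (the `Expr` type here is an illustrative example, not from any particular codebase):

```haskell
{-# LANGUAGE GADTs #-}
{-# LANGUAGE NoMonomorphismRestriction #-}

-- A small GADT: each constructor carries a precise result type,
-- without dragging in the rest of GHC's extension zoo.
data Expr a where
  IntE  :: Int  -> Expr Int
  BoolE :: Bool -> Expr Bool
  Add   :: Expr Int -> Expr Int -> Expr Int

-- Pattern matching refines the type variable 'a' per branch.
eval :: Expr a -> a
eval (IntE n)  = n
eval (BoolE b) = b
eval (Add x y) = eval x + eval y

main :: IO ()
main = print (eval (Add (IntE 1) (IntE 2)))
```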


The complexity of a language has nothing to do with its size.

Brainfuck or Whitespace are two minuscule languages that produce the most impenetrable sources.


That's true. I meant size in the hand-wavy sense of "this feels complicated" (although the syntax part is also true in this case). For example there's a GHC extension to overload the meaning of a type declaration (DataKinds). This technically doesn't introduce new syntax but it's definitely a "big" extension in my book.


> I'm from Microsoft and when you see Microsoft documentation it often says, you know, x y or z is a rich something, right. In this case Haskell - or ghc's version of haskell is a rich language. What does this mean? Sounds good, doesn't it? But it always means this, right. That it's a large, complex, poorly understood, ill documented thing that nobody understands completely.

From a great talk by Simon Peyton Jones https://youtu.be/re96UgMk6GQ



Haskell has a ridiculous number of obscure operators. Here's a list of "common surprising" operators in Haskell:

https://haskell-lang.org/tutorial/operators


I don't think it makes sense to characterize Haskell as "big" on this basis, because 1) it is trivial to define an operator in Haskell, so there's bound to be a lot of them and 2) even the "standard" operators typically have a simple definition (e.g. https://www.stackage.org/haddock/lts-12.9/base-4.11.1.0/src/...).
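Both points can be shown in a few lines. The `(|>)` pipeline operator below is a hypothetical example (similar operators exist in libraries, but the name here is just for illustration); the point is that a full definition, fixity and all, fits in three lines, and that standard operators like `($)` are about this small too:

```haskell
-- Declaring fixity and defining the operator is a one-liner each.
infixl 1 |>
(|>) :: a -> (a -> b) -> b
x |> f = f x

-- For comparison, the standard ($) is essentially:  f $ x = f x

main :: IO ()
main = print ([1, 2, 3] |> map (* 2) |> sum)
```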


On the whole, I consider user-defined infix operators to be a huge mistake. While the few common ones are great, the ability for every single library creator to add their own infix operator turns into a mess in the long run.


They're fine inside a limited domain-specific scope; just don't go importing operators from many libs willy-nilly.


It’s very hard to convince people to keep them in that limited scope.


Even if the operators themselves don't count as "part of the language", the complex precedence rules around them certainly should.


In what way? Those are defined as part of the operator. I usually parenthesize them anyway just like I would for an equation.


Those are mostly library functions, not part of the language.

It doesn't seem any more reasonable to use them to declare Haskell a complex language than it would to have the existence of, say, a linear algebra library providing mathematical operators for Forth mean that Forth is a complex language.


Depends on what library they're from.

Is the STL part of C++? Sure, it's a library, but it's a standard library - it should be there in every conforming implementation. Personally, I think of that as part of the language.

Is some vendor's RS232-port-handling library part of C++? I would say no.

In the same way, I think that Java's standard library is part of the language, and is in fact the strongest selling point of Java.


I agree that this is one of the things that makes Haskell harder/scarier to learn, but I don't think that it makes it bigger.


Those are part of the standard library rather than the language itself though. You can (and a lot of people do) define and use your own standard library instead (hmmm... maybe a rabbit hole of its own? Although it seems companies with legitimate business needs do this as well so who knows).


They're just functions.


None of these are remotely obscure unless you don't know functional programming.


Large in the sense that it admits many approaches to solving problems.

https://www.willamette.edu/~fruehr/haskell/evolution.html


It's meant as a joke but as the author of that page points out, I think there's real pedagogical value in understanding all of the examples. You'll learn quite a bit of CS fundamentals independent of Haskell itself.

Then please do as the professor does when it comes to production code.


I think the complexity in Haskell largely comes from its advanced type system and laziness. For example, pervasive use of monads in Haskell is a direct result of encoding side effects using the type system. You can't just put a log statement in a function, you have to do a whole design exercise of how to push it to the edge of the application.


Eh... Monads aren't really just about side effects. They're really just a generalization of the threading macros in Clojure (sort of; at least I view them with the same motivation as the threading macros and their cousins in Clojure). As for logging, if it's really just a log statement I'll sometimes just do the equivalent of `unsafePerformIO`ing it and not worry about it. All depends on what you consider semantically meaningful behavior from the program.

On the other hand, with the rise of stuff like OpenTracing and structured logging I think the Haskell community was pretty prescient about treating logging as an explicit side effect.


I agree that monads aren't just about side effects, and lots of languages use monadic patterns. My point was that using monadic patterns is prevalent in Haskell specifically due to using the type system to track side effects such as IO. Clojure has monadic libraries like cats that let you write Haskell-style code, but they're not popular because in most cases you can solve the problem in a more direct way.


Well the monadic structure is always there, it's just a matter of whether you use it or not :).

For the most part I admire the Clojure community's focus on data and wariness of higher order abstractions. Whether it be classes, typeclasses, or higher order functions, if you can express it with just data it's almost always better and most communities would do well to remember that.

On the flip side when there is a need for higher order abstractions, I sometimes find Clojure's standard tools to be lacking. Speaking of monadic structure being there whether you use them or not, transducers are a great example. I find them overcomplicated in Clojure, which I think is due to focusing too heavily on using standard function composition to compose them. I regard this as a bit of a trick or a coincidence. When's the last time you composed something with a transducer that wasn't just another transducer as opposed to an arbitrary function? In fact if you instead use monadic composition (i.e. compose `a -> m b` with `b -> m c` to get `a -> m c`, in this case `m` is just `List`) you'll find that transducers are just functions of the form `a -> List b` rather than higher order functions. And yes that `List` remains there even though transducers work on things that aren't just concrete collections.
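The encoding described above can be sketched in Haskell, where the list monad's Kleisli composition `>=>` plays the role of the custom composition operator (the names `mapping`, `filtering`, and `evensDoubled` are illustrative, not from any library):

```haskell
import Control.Monad ((>=>))

-- A "transducer" encoded as a Kleisli arrow a -> [b],
-- i.e. a -> m b with m = List, per the comment above.
mapping :: (a -> b) -> (a -> [b])
mapping f x = [f x]

filtering :: (a -> Bool) -> (a -> [a])
filtering p x = if p x then [x] else []

-- Compose with >=> (monadic composition) instead of plain (.).
evensDoubled :: Int -> [Int]
evensDoubled = filtering even >=> mapping (* 2)

main :: IO ()
main = print (concatMap evensDoubled [1 .. 6])
```

Running the composed pipeline over a collection is then just `concatMap`; the `List` in the type stays even when the same composed function is driven by something other than a concrete list.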

I've been meaning to try to push out a Clojure library that shows this but haven't gotten around to it. Maybe this thread will be the kick I need.


Yup, there are always trade offs with every approach. A monadic version of transducers would be neat, and it would be fun to contrast them with the HOF approach in terms of pros and cons. :)


Well hopefully the monadic structure would be hidden. The only difference would be a custom composition operator instead of function composition. You could interop between the two representations with conversion functions.


http://hackage.haskell.org/package/base-4.11.1.0/docs/Debug-...

For future reference, this isn’t a good argument for trolling Haskellers.


>These can be useful for investigating bugs or performance problems. They should not be used in production code.


Well yeah, because a logging statement in production can fail (e.g. the network connection drops). The type system forces you to deal with that fact instead of letting you write code that e.g. brings down your server unexpectedly because of some random log call.

Many would consider that a feature, but if you want to #yolo it anyway, like in most other languages, just use trace in prod and call it a day.
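For reference, the `trace` escape hatch looks like this: a log line inside an otherwise pure function, no `IO` in the type (the base library's own docs advise keeping this out of production code):

```haskell
import Debug.Trace (trace)

-- A pure function with a "yolo" log via Debug.Trace: the
-- message goes to stderr as a side effect of evaluation,
-- and the type signature stays pure.
fib :: Int -> Int
fib n
  | n < 2     = n
  | otherwise = trace ("fib " ++ show n) (fib (n - 1) + fib (n - 2))

main :: IO ()
main = print (fib 5)
```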


The type system forces you to solve this problem in a very specific way by structuring your entire app around pushing IO to the edges. There are plenty of other ways to address the problem that work perfectly fine in practice.

For example, you can specify what should happen if IO fails in your logging configuration. This handles the exceptional case consistently and in a single place without forcing you to structure your whole app around it.

What's more is that there really isn't a sane way to recover from such a catastrophic failure. If your database goes down, or you lose a disk, the only thing you can do is shut down the app. It's not like it's gonna keep humming along with the logging failing silently.

This kind of hyperbole that you either solve all problems via the type system or #yolo is precisely what makes the Haskell community so toxic in my opinion.


> For example, you can specify what should happen if IO fails in your logging configuration. This handles the exceptional case consistently and in a single place without forcing you to structure your whole app around it.

In Haskell you can write a function that does "unsafe" logging and handles exceptions if that's the behaviour you want. You can even make that function change its behaviour based on a config file if you really want to.
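A minimal sketch of such a function, assuming you want failures swallowed rather than propagated (the name `logSafe` is made up for illustration; `try` from `Control.Exception` is the standard way to catch here):

```haskell
import Control.Exception (SomeException, try)
import System.IO (hPutStrLn, stderr)

-- A logger that swallows its own failures, so a broken log
-- sink never takes the program down.
logSafe :: String -> IO ()
logSafe msg = do
  r <- try (hPutStrLn stderr msg) :: IO (Either SomeException ())
  case r of
    Left _   -> pure ()  -- log sink failed; carry on without it
    Right () -> pure ()

main :: IO ()
main = do
  logSafe "starting up"
  putStrLn "work continues even if logging fails"
```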

> What's more is that there really isn't a sane way to recover from such a catastrophic failure. If your database goes down, or you lose a disk, the only thing you can do is shut down the app. It's not like it's gonna keep humming along with the logging failing silently.

That sounds like an argument for including the IO effect in your whole program's structure like people usually do in Haskell, no? If you change a function that doesn't access the disk (e.g. something that just grinds out a mathematical computation) into one that does access the disk by adding logging, you have a new set of possible failure scenarios to be aware of and want that to be visible.

I appreciate that this all sounds very theoretical but it can easily lead to real-world failures. I've seen "impossible" control flow, caused by an unanticipated exception in a function that didn't look like it could throw, lead to production issues. I can easily imagine e.g. leaving data in a remote datastore like redis in a supposedly impossible state because your redis-error-handling code tried to first log that an error had occurred and that logging then failed because local disk was full.

> This kind of hyperbole that your either solve all problems via the type system or #yolo is precisely what makes Haskell community so toxic in my opinion.

Every time I've bypassed the type system I've come to regret it, usually when it caused a production issue. It's not hyperbole, it's bitter experience.


All I can say is that we clearly have divergent experiences here. I've worked with statically typed languages for about a decade, I used Haskell specifically for about a year. I've since moved to Clojure, and I've been using it for the past 8 years professionally. My experience is that the team I work on is much more productive with Clojure than any static language we've used, and we have not seen any increase of defects in nearly a decade of using the language compared to similar sized projects we've developed previously in statically typed languages like Java and Scala.

If dynamic typing was problematic we would've switched back a long time ago.


What makes you think the Haskell community is toxic? I usually hear the exact opposite.


I find that the Haskell community is very friendly as long as you buy into their approach to solving problems. However, my experience is that if you question the effectiveness of static typing, or ask for evidence in support of the claimed benefits you'll get a very hostile reaction.

The comment above where pka snidely claims that using any alternative to types amounts to yolo is quite representative. He outright dismisses that any valid alternatives are possible, and he indirectly claims that people using other methods are being unprofessional and are cutting corners. That amounts to toxic behavior in my opinion.


> However, my experience is that if you question the effectiveness of static typing, or ask for evidence in support of the claimed benefits you'll get a very hostile reaction.

That's an interesting thing to say, seeing how representative figures of (specifically) the Clojure community get really defensive really quickly once somebody questions the effectiveness of dynamic typing - which I specifically didn't do.

> He outright dismisses that any valid alternatives are possible

I was merely dismissing your incorrect claim about logging in Haskell. And yes, not taking advantage of the type system when you are already programming in a language with a type system amounts to #yoloing it, in my opinion. Doing the same in other languages is fine, since you don't have another option really.


I'm not really sure what you're referring to by people getting defensive to be honest. People are just telling you that their experiences don't match yours.

You've already conceded that your solution is not appropriate for production, yet you keep saying the claim is incorrect. You can't have it both ways I'm afraid, and it's the definition of being defensive. You can't even acknowledge that your preferred approach to dealing with side effects has any drawbacks to it.

Meanwhile, the opposite of what you're claiming is the case in practice. Other languages allow you to use monads to encode side effects if you wanted to, but Haskell is the language that doesn't give you other options.


> You've already conceded that your solution is not appropriate for production

I’ve done no such thing.

> You can't even acknowledge that your preferred approach to dealing with side effects has any drawbacks to it.

It does, probably not the drawbacks you think of though (“conceptual overhead of types”?).

Generally, you seem to be awfully incompetent in a language you claim to have used for a year. That’s not a bad thing, but you don’t present your arguments with a big fat disclaimer stating that. If you did, I think there would be much less tension in these kind of discussions.


The link you gave literally says:

>These can be useful for investigating bugs or performance problems. They should not be used in production code.

This is perfectly in line with my original claim.

The drawback is that you're forced to structure your app around types, and this can lead to code that's harder for humans to understand the intent of. The fact that code is self consistent isn't all that interesting in practice. What you actually want to know is that the code is doing what was intended. Types only help with that tangentially as they're not a good tool for providing a semantic specification. If you don't understand that then perhaps you're the one who is awfully incompetent in this language.

>Generally, you seem to be awfully incompetent in a language you claim to have used for a year.

I disagree with the philosophy of the language because I have not found it to provide the benefits that its adherents ascribe to it, and I gave you concrete examples of the problems it introduces. I'm also not sure what you're judging my competence in it on exactly, as you've likely never seen a single line of code that I've written in it. Perhaps if you stopped assuming things about other people's competence based on your preconceptions there would also be much less tension in these kinds of discussions.


> I'm also not sure what you're judging my competence in it on exactly as you've likely never seen a single line of code that I've written in it.

And I don’t need to. I (and anyone proficient in Haskell, really) can infer your competence with somewhat reasonable confidence based on your comments, like this one.

People who’ve just read LYAH are able to contribute to production codebases already, no problem, but they still may not have even understood basic concepts, such as functors or monads (this is firsthand experience from work), let alone monad transformers, arrows or lenses - which I consider to be a good thing. Based on your comments (and not only this thread here), this is where I’d place you in terms of competency, but of course you are more than welcome to correct me.


>And I don’t need to. I (and anyone proficient in Haskell, really) can infer your competence with somewhat reasonable confidence based on your comments, like this one.

And this is the most hilarious aspect of Haskell community. You assume that the only reason people might not like the approach is due to their sheer ignorance of the wonders of the type system.

>People who’ve just read LYAH are able to contribute to production codebases already, no problem, but they still may not have even understood basic concepts, such as functors or monads (this is firsthand experience from work), let alone monad transformers, arrows or lenses - which I consider to be a good thing. Based on your comments (and not only this thread here), this is where I’d place you in terms of competency, but of course you are more than welcome to correct me.

Nowhere did I state that I have trouble doing any of those things. What I said is that I have not found any tangible advantage from doing it. I found that this approach results in code that's less direct and thus harder to understand. This is a similar problem to the one lots of enterprise Java projects have, where they overuse design patterns.

My experience tells me that the code should be primarily written for human readability. Haskell forces you to write code for the benefit of the type checker first and foremost.


> And this is the most hilarious aspect of Haskell community. You assume that the only reason people might not like the approach is due to their sheer ignorance of the wonders of the type system.

No, I don’t, and when I happen upon somebody who is competent in Haskell and still prefers Clojure/Erlang/whatever then genuinely interesting conversations tend to happen.

This is not the case here though.


At this point your whole argument is just ad hominem. You're not addressing any of my points, and you're just making unsubstantiated personal attacks on my competence. I don't see any point in having further interaction.

Have a good day.


Doubting your competence is not a personal attack/ad hominem, so don’t try to twist my words.


[flagged]


This comment breaks the site guidelines in multiple ways. No personal attacks, and no programming language flamewars, please, on Hacker News.

https://news.ycombinator.com/newsguidelines.html


You know he can be wrong about some things but still make a valid point about others. How do you think this reads in relation to his primary claim about the Haskell community?

> People who claim dynamic type systems are superior to static ones can be dismissed in much the same way flat earthers can because both choose to dismiss evidence they find inconvenient.

Seriously. This only feeds into his narrative.


I was going to suggest that yogthos may just be thinking of that one guy who converted from clojure to haskell and had a convert's typical evangelical zeal in #clojure, but then I read your reply. Ironic.


I hope you haven't gotten the impression that it's types or nothing when it comes to Haskell (although given the bleeding edge of the community's tendency to asymptotically approximate dependent types with GHC extensions I can see where the sentiment comes from). Haskell is certainly not expressive enough to push all invariants to the type system and sometimes you just record the possibility of an error in the type and then just do a runtime check. It's a similar phenomenon to Lisp beginners who discover macros and decide to macro everything and anything.

That being said, I don't think "well, just let it blow up" is a good philosophy either. It works in a certain subset of cases (when you know that your program can only do certain things and you have a useful supervisor, as in Erlang).

I think that philosophy is part of why Clojure has historically had problems with error handling, especially in its own toolchain (e.g. its rather cryptic error messages), and still doesn't really have good tooling or patterns for it (e.g. I think nil punning is a dangerous pattern, throwing useful and specific exceptions feels unidiomatic with the need for gen-class, and it doesn't seem like other solutions have garnered much mind share).

Spec is gaining momentum, but seems to still be spewing mysterious messages that require some familiarity to decipher in some cases (although last I checked things seemed to be on the upswing and I'm quite out of date with Clojure at this point).

Even when your app fails you still want to leave a human-readable message and then e.g. log metrics on what kind of error it was, with metadata about the error, to some supervisor service, rather than just let whatever the deepest exception was filter through. "Our authorization request to the database failed with an authorization error and this is the metadata" saves a lot of time compared to an NPE, especially when you're the maintainer and not the writer of the code in question.

The Elm community I think is the gold standard of taking the error path as seriously as the success path when it comes to their tooling and it really pays in my experience when I use it.


In the specific case letting it blow up is really the only thing you can do unless the whole architecture is designed around it. However, my point was that there are many ways to handle that type of problem, and I've seen no evidence to suggest that using types is the most effective one in practice.

When it comes to error messages, I would argue that Haskell ones are no better than Clojure. If anything they're often even less useful because of how generic they tend to be. All you'll know is that A didn't match B somewhere, and figuring out why is an exercise left to the reader.

Personally, I like nil punning and I find it works perfectly fine in practice. My view is that data validation should happen at the edges of the application, instead of being sprinkled all over business logic. If you know the shape of the data, then you can safely work with it.

The idea of doing validation at the edges also applies to functions, if nil has semantic meaning then the function should handle it before doing any nil punning. If it doesn't then it should be safe to let it bubble to a place where it does.

The idiomatic solution for throwing errors in Clojure is to use ex-info and ex-data as seen here https://clojuredocs.org/clojure.core/ex-info

Spec errors are not really meant for human consumption, but there are libraries such as expound https://github.com/bhb/expound that produce human readable output. My team is using it currently, and we're very happy with it. I do think that more could be done in the core however, and it does appear that 1.10 will make some improvements in that department.

In general, the impression I have is that Clojure errors aren't poor due to technical reasons, but simply because the core team hasn't considered them to be a priority until recently.


Oh yeah GHC error messages are bad. Notice how I said Elm instead of Haskell at the end :). One of the things I dislike about GHC is the wasted potential there that's evident in the compilers for newer languages like Rust, Elm, and Purescript when it comes to errors.

I haven't found a good way of switching on ex-info generated exceptions. I usually end up having to do string matching or key searching both of which feel brittle. There's ways of following patterns within the boundaries of my own codebase (e.g. a custom key I know will always be there), but that doesn't play well with the ecosystem at large because there aren't well established conventions around what's what. I don't want to come off as saying there's a fundamental reason why Clojure couldn't have better error handling. It's not a language level thing but rather a community thing. I think Python is a good example of where the community has coalesced around using specific error types even though it's a dynamic language.

I totally agree with doing validation at the edges of the program and try to enforce that in whatever language I'm writing in. I think this is a common misconception about statically typed FP that shows up in e.g. Rich Hickey's talk about types. It's quite rare for things like `Maybe` or `Either` to actually show up in your data structure (e.g. Rich Hickey's SSN example). You usually end up bubbling the error to the top of your module and deal with it at a module level. The type is just there to make sure you don't forget to deal with the error (which is the biggest thing I miss when I'm in languages which emphasize open world assumptions and don't give good tooling to create closed world assumptions; I want to know if I've handled all my errors and all states of my application!).
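A sketch of that "edges, not data structures" point: `Either` lives in the parsing layer and bubbles to the boundary via `<$>`/`<*>`, while the domain type itself stays plain (all names here are illustrative, not from any real codebase):

```haskell
-- The domain type carries no Maybe/Either; validation happened
-- at the edge before this value could be constructed.
data User = User { name :: String, age :: Int }
  deriving (Show, Eq)

-- Edge-of-the-program validation; the error is in the type,
-- so the compiler won't let a caller forget to handle it.
parseAge :: String -> Either String Int
parseAge s = case reads s of
  [(n, "")] | n >= 0 -> Right n
  _                  -> Left ("bad age: " ++ s)

-- The Left case bubbles up automatically through (<$>).
parseUser :: String -> String -> Either String User
parseUser n a = User n <$> parseAge a

main :: IO ()
main = do
  print (parseUser "ada" "36")
  print (parseUser "bob" "-1")
```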

Yeah the problem is that I've found in the legacy Clojure codebases I've maintained it's rare that nil ever has a unique meaning and often ends up getting reused for a lot of different meanings. For example "A key doesn't exist in this map" and "This data is of a completely different shape than I expected" are different error conditions with different errors that usually both turn nil in the ecosystem.

This lispcast article really hit home for me and sums up some of the pain points I hit using Clojure in production: https://lispcast.com/clojure-error-messages-accidental/. You eventually internalize the compiler errors so they're not a huge deal but the ecosystem at large doesn't have a great story for runtime errors.


Yeah that's a fair point regarding lack of standard error handling with ex-info. It does feel like one of the less thought out areas of the language to me. I definitely agree with the lispcast article in calling the errors accidental.

I'd really like to see something along the lines of Dialyzer for Clojure. This talk proposes a good approach for that, I think: https://www.youtube.com/watch?v=RvHYr79RxrQ

It would be great to have a linting tool that finds obvious errors, and informs you about them at compile time. For me that would be an acceptable compromise.

Overall, I would say that it does take more discipline to write clean code in a dynamic language. The problems you describe with legacy codebases are quite familiar. I've made my share of messes in the past, but I also find that was a useful learning experience for me. I'm now much better at recognizing patterns that will get me into trouble and avoiding them.


I'm somewhat wary of approaches like that. I feel like core.typed tried something similar (maybe McVeigh's approach does better inference of unannotated code?) and it's withering on the vine as far as I can tell. IIRC there's some theoretical hints that gradual typing from the direction of untyped to typed rather than the direction of typed to untyped is fundamentally less ergonomic (some type inference stuff becomes undecidable in the former case and remains decidable in the latter and the same occurs for some varieties of type checking). Of course theory doesn't always mean you won't have practically good solutions (since when has the halting problem stopped people from making static analyzers?), but they provide some hints you'll be swimming upstream.

My limited experience with static analyzers is that you also end up with unpredictable breakages of the form "hmmm... so if I leave the variable here my static checker tells me I'm wrong, but as soon as I move the variable down one level of scope it just silently fails to see the error."

Regardless I haven't used Dialyzer myself and I have a good idea of the very very finite number of minutes I've spent with Erlang proper (as opposed to just reading about it), so who knows. It'll be fun to see where this goes. Thanks for the link!

The really big innovation I'm personally waiting for is combining static types with image-based programming (e.g. Clojure's REPL) which seems like an open problem right now because it's pretty difficult to think about what static invariants can and can't be maintained when you can hot reload arbitrary code. That and better support for type-driven programming a la Idris. Working with the compiler in a pull-and-push method (which you can get a crude approximation of in Haskell with type holes and Hoogle and some program synthesis tools, but oh man if even the toolchain for that was mature that would be huge!) was as big a revelation for me as REPL-based programming in Clojure was.


Key difference here is that it's not aiming to be a comprehensive type system, just to catch obvious problems. So if it runs into something it doesn't understand it'll just move on and leave it as is. If it sees something it understands and it's incorrect it will give an error.

Personally, I would find this very valuable because it would help catch many common errors early while staying completely out of the way.

And yeah, I can't really do development without the REPL anymore. I find the REPL makes the whole experience a lot more enjoyable and engaging than the compile and test cycle. It's really a shame that most languages still don't provide this workflow.


Didn't core.typed try to do the same thing? Not provide comprehensive types but just as needed? I never really used it (a coworker did but ended up throwing it out I think). Even if it's exactly the same technically maybe it'll work out with a different set of social circumstances. Maybe if Circle didn't drop core.typed it'd be even more popular now. Never know about these things.

Haha, well that's where you and I differ. The REPL is amazing, but I still want my ability to create closed world assumptions first!


core.typed requires you to annotate everything in the namespace, or add exclusions explicitly. This introduces quite a bit of additional work, and I suspect that's why it never really caught on.

I've read that the author is looking at improving inference in it, and at generating types from Spec, so it might still find a niche after all.

And I understand completely, it's all about perceived pain points at the end of the day, and we all optimize for different things based on our experience and the domain we're working in. That's why it's nice to have lots of different languages that fit the way different people think. :)


You don't need to do either. Just run unsafePerformIO if you really want to spit out log statements randomly.

I like having to do logging the Haskell way, actually. It allows for more coherent and easier reuse. What if you want to reuse some code, but do logging in a specific way? By allowing the caller to decide where logging goes you can readily accomplish this.


In other languages, if the logging facility fails, you can simply continue running the program without logging. This works reliably enough across the world for many years that no one worries about log statements in production being unsafe.


Some people in the Haskell community think so too. See e.g. http://hackage.haskell.org/package/simple-logger-0.0.4/docs/... where you log pure code without introducing a change at the type level.


Right. If you want to troll Haskellers just ask about the runtime.


This is one of the things I rather like about Scala. It still leaves suitable room for the programmer to decide "that doesn't count", and defer deciding what does / does not count until after they have their application sketched out.

Your "pure" function can become "pure except for logging", "pure except for analytics calls", "pure except for notifications", or whatever you decide.


Haskell 98 is a simple language. Modern Haskell is not. Or, at least, the superset of modern Haskell that GHC implements is not simple.


I don't think it is small or minimal but I believe it to be smaller than Scala


> Nowadays I have a strong preference for simple and focused languages that use a small number of patterns that can be applied to a wide range of problems.

Haskell is a simple and focused language, provided you use GHC with minimal extensions.



