I would like to use Haskell or another functional language professionally.

I try them out (OCaml, Haskell, Clojure, etc.) from time to time and think they're fairly interesting, but I struggle to figure out how to make bigger programs with them: I've never seen how you build up a code base with the tools they provide, or had someone to review the code I produce, and so I've never had any luck with the jobs I've applied to.

On the flip side, I never had too much trouble figuring out how to make things with Go, as it has so little going on and because it was the first language I worked with professionally for an extended period of time. I think that also leads me to trying to apply the same patterns because I know them, even if they don't really work in the world of functional languages.

Not sure what the point of this comment is, but I think I just want to experience the mind-opening moment that people talk about when it comes to working with these kinds of languages on a really good team.


I’ve been working with pure functional languages and custom lisp dialects professionally my whole tenure. You get a whole bag of problems for a very subjective upside. Teams fragment into those that know how to work with these fringe tools and those who don’t. The projects using them that I worked on all had trouble with getting/retaining people. They also all had performance issues and had bugs like all other software. You’re not missing out on anything.

Many problems stem from people not being willing to learn another paradigm of computer programming. Of course teams will split if some people are not willing to learn, because then some will be unable to work on certain things while others will be able to.

You mention performance. However, if we look at how many Python shops there are, this can hardly be a problem. I imagine ecosystems to be a much bigger issue than performance. Many implementations of functional languages have better performance than Python anyway.

There are many reasons why a company can have issues retaining people: a shitty uninteresting product, bad management, low wages, bad culture ... Let's eliminate those and see whether they still have issues retaining devs. I suspect that an interesting tech stack could make people stay, because it is not so easy to find a job with such a tech stack.

However, many companies want easily replaceable cogs, which FP programmers are definitely not these days. So they would rather hire a low-skill, easily replaceable workforce than a highly skilled but more expensive one. They know they will not be able to retain the highly skilled, because they know their other factors are not in line for that.


> Teams fragment into those that know how to work with these fringe tools and those who don’t.

So the teams self-select to let you work with the people you want to work with? Tell me more!


I’ve been using Haskell professionally off and on, along with other languages, since 2008. Professional experience certainly will help you learn some patterns, but honestly my best advice for structuring programs is to not think too hard about it.

Use modules as your basic unit of abstraction. Don’t go out of your way to make their organization over-engineered, but each module should basically do one thing, and should define everything it needs to do that thing (types, classes, functions).
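
For a concrete (and entirely hypothetical, just for illustration) picture of what I mean, a module that does one thing might look like this:

    -- A hypothetical Invoice module: it defines the type it is about,
    -- plus everything needed to work with that type, and nothing else.
    module Invoice
      ( Invoice (..)
      , LineItem (..)
      , total
      ) where

    data LineItem = LineItem
      { description :: String
      , amountCents :: Int
      }

    data Invoice = Invoice
      { invoiceId :: Int
      , items     :: [LineItem]
      }

    -- The one job of this module: compute an invoice total.
    total :: Invoice -> Int
    total = sum . map amountCents . items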

Use parametric polymorphism as much as you can, without making the code too hard to read. Prefer functions and records over type classes as much as possible. Type classes that only ever have a single instance, that don't have laws, or that are defined for unit data types are major code smells.
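
As a sketch of what I mean by preferring records over single-instance type classes (names made up for illustration):

    -- Instead of a single-instance type class like
    --
    --   class MonadLogger m where logMsg :: String -> m ()
    --
    -- pass a plain record of functions.
    data Logger = Logger
      { logInfo  :: String -> IO ()
      , logError :: String -> IO ()
      }

    -- An ordinary value, not an instance; easy to swap for a test double.
    consoleLogger :: Logger
    consoleLogger = Logger
      { logInfo  = putStrLn . ("[info] " ++)
      , logError = putStrLn . ("[error] " ++)
      }

    greet :: Logger -> String -> IO ()
    greet logger name = logInfo logger ("greeting " ++ name)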

Don’t worry about avoiding IO, but as much as you can try to keep IO code separate from pure code. For example, if you need to read a value from the user, do some calculations, then print a message, it’s far better to factor the “do some calculations” part out into a pure function that takes the things you read in as arguments and returns a value to print. It’s really tempting to interleave logic with IO but you’ll save so much time, energy, and pain if you avoid this.
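
A minimal sketch of that shape (function names are hypothetical):

    -- Pure core: the logic lives here and is trivially testable.
    summarize :: Int -> Int -> String
    summarize lo hi = "sum = " ++ show (sum [lo .. hi])

    -- Thin IO shell: read, call the pure function, print.
    main :: IO ()
    main = do
      lo <- readLn
      hi <- readLn
      putStrLn (summarize lo hi)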

Essentially, keep things as simple as you can without getting belligerent about it. The type system will help you a lot with refactoring.

Start at the beginning. Write functions. When you see some piece of functionality that you need, use `undefined` to make a placeholder function. Then, go to your placeholder and start implementing it. Use `undefined` to fill in bits that you need, and so on.
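
For instance, a sketch of this workflow (all names hypothetical):

    import System.Environment (getArgs)

    -- Sketch the shape of the program first; fill the holes in later.
    data Config = Config  -- placeholder type, to be fleshed out

    parseArgs :: [String] -> Config
    parseArgs = undefined  -- TODO: implement

    runWith :: Config -> IO ()
    runWith = undefined    -- TODO: implement

    main :: IO ()
    main = do
      args <- getArgs
      runWith (parseArgs args)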

Fancy types are neat but it’s easy to end up with a solution in search of a problem. Avoid them until you really have a concrete problem that they solve- then embrace them for that problem (and only that problem).

You’ll refactor a lot, and learn to have a better gut feeling for how to structure things, but that’s just the process of gaining experience. Leaning into the basics of FP (pure functions, composed together) will be the path of least resistance as you are getting there.


I have also initially struggled with structuring Haskell programs. Without knowing anything about what you want to do, here's my general approach:

1. Decide on an effect system

Remember, Haskell is pure, so any side-effect will be strictly explicit. What broad capabilities do you want? Usually, you need to access some program-level configuration (e.g. command-line options) and the ability to do IO (networking, reading/writing files, etc), so most people start with that.

https://tech.fpcomplete.com/blog/2017/06/readert-design-patt...

2. Encode your business logic in functions (purely if possible)

Your application does some processing of data. The details don't matter. Use pure functions as much as possible, and factor effectful computations (e.g. database accesses) out into their own functions.

3. Glue everything together in a monadic context

Once you have all your business logic, glue everything together in a context with your effect system (usually a monad stack using ReaderT). This is usually where concurrency comes in (e.g. launch 1 thread per request).
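
To make that concrete, here's a minimal, hypothetical sketch in the spirit of the ReaderT pattern linked above:

    import Control.Monad.IO.Class (liftIO)
    import Control.Monad.Reader (ReaderT, asks, runReaderT)
    import Data.Char (toUpper)

    -- 1. The effect system: a read-only environment over IO.
    data App = App
      { appName    :: String
      , appVerbose :: Bool
      }

    type AppM = ReaderT App IO

    -- 2. Business logic: pure where possible...
    shout :: String -> String
    shout = map toUpper

    -- ...with effectful pieces factored into their own functions.
    logLine :: String -> AppM ()
    logLine msg = do
      verbose <- asks appVerbose
      liftIO (if verbose then putStrLn msg else pure ())

    -- 3. Glue everything together in the monadic context.
    main :: IO ()
    main = runReaderT program (App "my-app" True)
      where
        program = do
          name <- asks appName
          logLine (shout name)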

---

Beyond this, your application design will depend on your use-case.

If you are interested, I strongly suggest reading 'Production Haskell' by Matt Parsons, which has many chapters on Haskell application structure.


> 1. Decide on an effect system

This shouldn't even be proposed as a question to someone new to Haskell. They should learn how monad transformers work and just use them. 90% of developers playing around with effect systems would be just fine with MTL or even just concrete transformers. All Haskell effect systems should be considered experimental at this point with unclear shelf lives.
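
For illustration, a small program on a concrete transformer stack might look like this (a hedged sketch; names are hypothetical):

    import Control.Monad.Trans.Class (lift)
    import Control.Monad.Trans.Except (ExceptT, runExceptT, throwE)

    -- A concrete stack: String errors layered over IO. Just the
    -- transformers library, no effect-system machinery.
    type App a = ExceptT String IO a

    readPositive :: App Int
    readPositive = do
      n <- lift readLn
      if n > 0 then pure n else throwE "expected a positive number"

    main :: IO ()
    main = do
      result <- runExceptT readPositive
      either putStrLn print result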

Everything else you said I agree with as solid advice!


> This shouldn't even be proposed as a question to someone new to Haskell. They should learn how monad transformers work and just use them. 90% of developers playing around with effect systems would be just fine with MTL or even just concrete transformers. All Haskell effect systems should be considered experimental at this point with unclear shelf lives.

This is highly debatable. I would say that the effect systems effectful and Bluefin are actually significantly simpler than MTL and transformers, particularly as soon as you need to do prompt resource cleanup.

Personally, I'd say that newbies should start with naked IO and then switch to effectful or Bluefin once they've realised the downside of IO being available everywhere.

> All Haskell effect systems should be considered experimental at this point with unclear shelf lives.

effectful and Bluefin are here to stay. I guarantee it. For non-IO-based effect systems (e.g. polysemy, freer-effects) I agree.

(Disclaimer: I'm the author of Bluefin)


> effectful and Bluefin are here to stay. I guarantee it. For non-IO-based effect systems (e.g. polysemy, freer-effects) I agree.

Just to be clear I think Bluefin is really cool and I'm a fan of your work overall.

I'm speaking from the industry/production perspective here. When polysemy and freer-effects were released there was a similar belief that they were here to stay.

transformers and MTL have stood the test of time, are heavily documented, and are pervasive throughout Hackage. Understanding them and how to build programs with them is essential for anyone trying to break into 'production haskell' as a career move.


> Just to be clear I think Bluefin is really cool and I'm a fan of your work overall.

Thanks!

> I'm speaking from the industry/production perspective here. When polysemy and freer-effects were released there was a similar belief that they were here to stay.

Well, from my perspective, I've never worked at a place that's used polysemy or freer-effects.

> transformers and MTL have stood the test of time, are heavily documented, and are pervasive throughout Hackage. Understanding them and how to build programs with them is essential for anyone trying to break into 'production haskell' as a career move.

Sadly true.


> Well, from my perspective, I've never worked at a place that's used polysemy or freer-effects.

I have, with another now-deprecated effect library. It's a bummer to have something so core to the architecture be deprecated, and then not have the time or buy-in to do anything about it.


That does sound like a bummer. To explain why I said that effectful and Bluefin are here to stay: it's because they're based on IO, so it's easy to get them to interoperate, and if in the future a new "EffectSystemX" comes along, also based on IO, then they will interoperate with that too. Thus the risk of them being deprecated is minimal.

I haven't published my effectful-Bluefin interoperation layer, but you can see it here:

https://github.com/tomjaguarpaw/bluefin/blob/caa59700d910c76...


Someone truly new to Haskell shouldn't use it professionally.

Once you've learned what is necessary to, say, modify already-existing applications, you should be familiar with monads and some basic monad transformers like ReaderT.

Once you're there, I don't think 'choosing an effect system' is a perilous question. The monad transformer library, mtl, is an effect system, the second simplest one after IO.
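
A small, hypothetical example of that mtl style, where functions declare the effects they need as class constraints rather than naming a concrete stack:

    {-# LANGUAGE FlexibleContexts #-}

    import Control.Monad.IO.Class (MonadIO, liftIO)
    import Control.Monad.Reader (MonadReader, asks, runReaderT)

    data Config = Config { greeting :: String }

    -- mtl style: say which effects you need, not which stack you run in.
    greet :: (MonadReader Config m, MonadIO m) => String -> m ()
    greet name = do
      g <- asks greeting
      liftIO (putStrLn (g ++ ", " ++ name))

    main :: IO ()
    main = runReaderT (greet "world") (Config "hello")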


The original poster said they want to use Haskell professionally but that they are struggling to understand how to structure programs.

> Once you're there, I don't think 'choosing an effect system' is a perilous question. The monad transformer library, mtl, is an effect system, the second simplest one after IO.

I'm aware of that, generally when people say "choose effect system" they mean choose some algebraic effect system, all of which (in Haskell) have huge pitfalls. The default should be monad transformers unless you have some exceptional situation.


> generally when people say "choose effect system" they mean choose some algebraic effect system

This isn't really true. Bluefin and effectful are effect systems, but not algebraic effect systems.


On a software engineering level, choosing such deep-reaching libraries unusually early in a program's development is a major but uninformed commitment, a dangerous bet that more practical programming languages try to avoid imposing on the user.

I realize I didn't mention monad transformers at all in my original post, I only linked to them!

I should have mentioned that, as you say, monad transformers should be the default effect system choice for 99% of people.


This is excellent advice that unfortunately seems to get lost in a lot of Haskell teachings. I learned Haskell in school but until I had to use it professionally I would have never been able to wrap my head around effect systems. I still think that part of Haskell is unfortunate as it can get in the way of getting things done if you're not an expert, but being able to separate pure functions from effectful ones is a massive advantage.

I've used Haskell professionally for two years. It is the right pick for the project I'm working on (static analysis). I'm less sold on the overall Haskell ecosystem, tooling, and the overall Haskell culture.

There are still plenty of ways to do things wrong. Strong types don't prevent that. Laziness is a double-edged sword and can be difficult to reason about.


People love to talk about the upsides and the fun and what you can learn from Haskell.

I am one of these people.

People are much more reluctant to share what it is that led them to the conclusion that Haskell isn't something they want to use professionally, or something they can't use professionally. It's a combination of things, such as it just generally being less energizing to talk about that, and also some degree of frankly-justified fear of being harassed by people who will argue loudly and insultingly that you just Don't Get It.

I am not one of those people.

I will share the three main reasons I don't even consider it professionally.

First, Hacker News has a stronger-than-average egalitarian streak and really wants to believe that everybody in the world is already a developer with 15 years of experience and expert-level knowledge in all they survey from atop their accomplished throne, but that's not how the real world works. In the real world I work with coworkers whom I have to teach why, in my Go code, a "DomainName" is a type instead of just a string. Then, just as the light bulb goes off, they move on from the project and I get the next junior dev I have to explain it to. I'm hardly going to get to the point where I have a team of people who are Haskell experts when I'm explaining this basic thing over and over.

And, to be 100% clear, this is not their "fault", because being a junior programmer in 2024 means facing a mountain of issues I didn't face at their age: https://news.ycombinator.com/item?id=33911633 I wasn't expected to know how to do source control or write everything to be rollback-able or interact with QA, or, well, see the linked post for more examples. Haskell is another stack of requirements on top of a modern junior dev that is a hell of an ask. There better be some damn good reasons for me to add this to my minimum-viable developer for a project. I am not expressing contempt for the junior programmers here from atop my own lofty perch; I am encouraging people to have sympathy for them, especially if you came up in the 90s when it was relatively easy, and to make sure you don't spec out projects where you're basically pulling the ladder up after yourself. You need to have an onboarding plan, and "spend a whole bunch of time learning Haskell" is spending a lot of your onboarding plan currency.

Second, while a Haskell program that has the chef's kiss perfect architecture is a joy to work with, it is much more difficult to get there for a real project. When I was playing with Haskell it was a frequent occurrence to discover I'd architected something wrong, and to essentially need to rewrite the whole program, because there is no intermediate functioning program between where I was and where I needed to be. The strength of the type system is a great benefit, but it does not put up with your crap. But "your crap" includes things like being able to rearchitect a system in phases, or partially, and still have a functioning system, and some other things that are harder to characterize but you do a lot of without even realizing it.

I'd analogize it to a metalworker working with titanium. If you need it, you need it. If you can afford it, great. The end result is amazing. But it's a much harder metal to work with for the exact same reason it's amazing. The strength of the end part is directly reflected in the metal resisting you working with it.

I expect at a true expert level you can get over this, but then as per my first point, demanding that all my fellow developers become true experts in this obscure language is taking it up another level past just being able to work in it at all.

Finally, a lot of programming requirements have changed over the years. 10-15 years ago I could feasibly break my program into a "functional core" and an external IO system. This has become a great deal less true, because the baseline requirement for logging, metrics, and visibility have gone up a lot, and suddenly that "pure core" becomes a lot less appealing. Yes, of course, our pure functions could all return logs and metrics and whathaveyou, and sure, you can set up the syntax to the point that it's almost tolerable, but you're still going to face issues where basically everything is now in some sort of IO. If nothing else, those beautiful (Int -> Int -> Int) functions all become (Int -> Int -> LoggingMetrics Int) and now it isn't just that you "get" to use monadic interfaces but you're in the LoggingMetrics monad for everything and the advantages of Haskell, while they do not go away entirely, are somewhat mitigated, because it really wants purity. It puts me halfway to being in the "imperative monad" already, and makes the plan of just going ahead and being there and programming carefully a lot more appealing. Especially when you combine that with the junior devs being able to understand the resulting code.
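
To illustrate the signature change (using mtl's Writer as a stand-in for that hypothetical LoggingMetrics monad):

    import Control.Monad.Writer (Writer, runWriter, tell)

    -- The pure version: easy to test, easy to compose.
    add :: Int -> Int -> Int
    add x y = x + y

    -- A stand-in for the hypothetical LoggingMetrics monad,
    -- here just a Writer accumulating log lines.
    type LoggingMetrics = Writer [String]

    -- The instrumented version: same logic, but now every caller
    -- lives in the monad too.
    addLogged :: Int -> Int -> LoggingMetrics Int
    addLogged x y = do
      tell ["add called with " ++ show (x, y)]
      pure (x + y)

    main :: IO ()
    main = do
      let (result, logs) = runWriter (addLogged 2 3)
      mapM_ putStrLn logs
      print result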

In the end, while I still strongly recommend professional programmers spend some time in this world to glean some lessons from it that are much more challenging to learn anywhere else, it is better to take the lessons learned and learn how to apply them back into conventional languages than to try to insist on using the more pure functional languages in an engineering environment. This isn't even the complete list of issues, but they're sufficient to eliminate them from consideration for almost every engineering task. And in fact every case I have personally witnessed where someone pushed through anyhow and did it, it was ultimately a business failure.


> I'd analogize it to a metalworker working with titanium. If you need it, you need it. If you can afford it, great. The end result is amazing. But it's a much harder metal to work with for the exact same reason it's amazing.

What a beautiful, succinct analogy. I'm stealing this.


> I'd analogize it to a metalworker working with titanium. If you need it, you need it. If you can afford it, great. The end result is amazing. But it's a much harder metal to work with for the exact same reason it's amazing. The strength of the end part is directly reflected in the metal resisting you working with it.

I’d say you missed one of the main points of Haskell and functional programming in general.

The combinator is the most modular and fundamental computational primitive available in programming. When you make a functional program, it should be constructed out of the composition of thousands of these primitives, with extremely strict separation from IO and multiple layers of abstraction. Each layer is simply composed functions from the layer below.

If you think of FP programming this way, it becomes the most modular, most reconfigurable programming pattern in existence.

You have access to all layers of abstraction and within each layer are independent modules of composed combinators. Your titanium is given super powers where you can access the engine, the part, the molecule and the atom.

All the static safety and beauty Haskell provides is actually a side thing. What Haskell and functional programming in general provides is the most fundamental and foundational way to organize your program such that any architectural change only requires you replacing and changing the minimum amount of required modules. Literally the opposite of what you’re saying.

The key is to make your program just a bunch of combinators all the way down, with an imperative IO shell that is as thin as possible. This is the nirvana of program organization and patterns.
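
As a toy sketch of that "combinators all the way down, thin IO shell" shape (hypothetical names):

    import Data.Char (toUpper)
    import Data.List (sort)

    -- Layer 1: tiny combinators.
    clean :: String -> String
    clean = filter (/= ',')

    loud :: String -> String
    loud = map toUpper

    -- Layer 2: composed from layer 1.
    normalize :: String -> String
    normalize = loud . clean

    -- Layer 3: composed from layer 2.
    report :: [String] -> [String]
    report = sort . map normalize

    -- The IO shell stays as thin as possible.
    main :: IO ()
    main = interact (unlines . report . lines)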


I'm well aware of functional programming as focusing on composition.

One of the reasons you end up with "refactoring the entire program because of some change" is when you discover that your entire composition scheme you built your entire program around is wrong, e.g., "Gee, this effects library I built my entire code base around to date is really nifty but also I can't actually express my needs in it after all". In a conventional language, you just build in the exceptions, and maybe feel briefly sad, but it works. It can ruin a codebase if you let it, but it's at least an option. In Haskell, you have a much bigger problem.

Now filter that back through what I wrote. You want to explain to your junior developer who is still struggling with the concept of using things other than strings why we have to rewrite the entire code base to use raw IO instead of the effects system we were using because it turns out the compilation time went exponential and we can't fix it in any reasonable amount of effort? How happy are they going to be with you after you just spent a whole bunch of time explaining the way to work with the effects system? They're not going to come away with a good impression of either Haskell or you.


> "Gee, this effects library I built my entire code base around to date is really nifty but also I can't actually express my needs in it after all"

This is why I recommend IO-based effect systems like Bluefin and effectful. If you find that you get stuck you always have the escape hatch of just doing whatever you want in IO. Maybe feel briefly sad, but it works.
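
Roughly, the escape hatch looks like this (a hedged sketch, assuming effectful's IOE/runEff interface):

    {-# LANGUAGE DataKinds #-}
    {-# LANGUAGE TypeOperators #-}

    import Control.Monad.IO.Class (liftIO)
    import Effectful (Eff, IOE, (:>), runEff)

    -- Because the effect monad is IO-based, any computation with the
    -- IOE capability can drop down to arbitrary IO.
    escapeHatch :: IOE :> es => Eff es ()
    escapeHatch = liftIO (putStrLn "raw IO, when the effect vocabulary runs out")

    main :: IO ()
    main = runEff escapeHatch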

(I'm the author of Bluefin)


I'm not terribly surprised. I use it, but I'd describe it as about as incompetently put together as my bank's app, maybe worse; it barely functions at all. I don't know how they managed it.

It was so bad when I used it; if it wasn't bizarre memory leaks or privacy issues, it was extremely poorly executed UX.

Between it and Fetlife, there are some huge issues with those communities just sticking with the first app that emerges, regardless of quality.


Given the overlap between those communities and OSS people I’m amazed no one has created a B Corp that does this stuff right.

No one migrates from the first big thing, so it's a waste of time. Feeld should be killed by this, and it'll barely make a dent.

When I used it, I enjoyed the community, but the app was never competently written. Then a while back they had a flag day where they rolled out a new app and a new server to everyone all at once, and most people were not able to log in; those that were lost their premium perks if they were paying customers, likes and chats got lost, etc. I was never actually able to log in, and just dropped the app at that point.

The demo video here is quite convincing. I'd seen PaperWM before but hadn't really understood the idea of it. May add this to my Sway setup :)


I don't have much interest in LoRa, but I do think an ESP32 device like this with a modem would be very cool as a simple hackable phone, sort of in the same vein as the Tangara MP3 player.


This looks cool. I've been interested in learning more about compilers since I did the basics in college. Lots of resources seem to focus on making interpreters and never make it to the code generation part, so it's nice to see that this features information about that.


With no disrespect to the book that's the subject of this thread, as I haven't read it: Bob Nystrom's Crafting Interpreters [0] is a fantastic book. It covers all phases of compilation, including both an interpreter and a VM.

It's been covered on several threads here over the years [1].

[0]: https://craftinginterpreters.com/

[1]: https://hn.algolia.com/?q=crafting+interpreters


I remember seeing this a while back. That typesetting is beautiful. Thank you for bringing it up here, I might have to pick that one up.

I’ve been bored with building line-of-business applications, despite designing for complex requirements in high-volume distributed systems.

In fact I took a break from CS learning entirely 9 months ago. Even reading HN. I’ve been studying electronics and analog signal processing instead.

But now that I’ve built about 50 guitar pedals of increasing complexity, I feel ready to switch back to CS studies again.


This book covers compiling to assembly whereas Crafting Interpreters only has a bytecode VM implementation. We'll see how good this book is when it drops, but I think that's a worthwhile feature that Crafting Interpreters punted on.


I'd like to find some good-quality but slow NVMe drives; I don't need super high speed for media serving, but getting a lot of storage (4x4TB / 2x8TB or so) is much more expensive than with hard drives. It'd be nice to have a silent home server.


Good quality has a baseline level of speed which is pretty fast. There's not much you can cut at that point.

But cheap SSDs got down to 2x the price of hard drives last year. Even after prices stabilized, they're at 3x. Flash catching up relatively soon seems likely. Flash matching the current price of hard drives seems even more likely.


I think that is too optimistic. The cost per capacity of NAND, even by Western Digital's roadmap, won't have come down to current HDD prices by 2027/2028. And that is assuming HDD prices don't fall.


I agree. The price difference between SSD storage and HDD per TB is still a multiple (seems to be currently about 5x). That is better than the 10x it was a few years ago; but I don't expect them to be equivalent any time soon.


What numbers are you looking at for 5x?

Making sure I'm looking at new hard drives, I see prices around $15/TB at the cheap end. There are a few name-brand SSDs at $45/TB, many more at $50/TB. They're not high end but even the low end is fast these days.


Just as SSDs are $50/TB at the cheap end, I have seen HDDs at $10/TB on the cheap end.


Definitely new ones? When and where? I keep an eye on prices and I've never seen that.


Although the ads I saw claimed they were new drives, I have no way to verify that they are not refurbished. Of course, I don't know if the $50/TB SSDs are reliable either.


It seems that HDD density has stopped improving so fast, and you're going to run into a wall on price reduction due to the extra material and shipping cost involved in producing an HDD vs. a tiny M.2 SSD.


I think it's more that we have different amounts of time in mind for "relatively soon". I'd be happy with that roadmap.

Where can I find info on that roadmap, by the way? I've found a few things but they're very vague about price at best.


Interesting, thanks. I'm quite hopeful to be hard drive free at some point in my NAS


Even the 870 QVO SATA drives are way more expensive than spinning rust.

https://www.amazon.com/SAMSUNG-870-QVO-SATA-MZ-77Q8T0B/dp/B0...

List price of $850, or $624 on sale, for 8TB.

Still way better than the M.2 NVMe drives which can be well over $1000 each.

However, these SATA drives still dominate HDDs in most metrics, like random access and sustained transfer at around 560 MB/s. Not great compared to modern NVMe drives at around 7 GB/s, but for reference, HDDs usually manage less than half that on sustained transfer, and their random access numbers can be dire.


99% Invisible did a good episode on the loss of Indian vultures and its effects. I found it quite interesting.

https://99percentinvisible.org/episode/towers-of-silence/


Radiolab also did an episode [1] which I really enjoyed; excited to listen to the 99% Invisible one.

[1] https://radiolab.org/podcast/corpse-demon

edit: I saved this comment to reply back and by the time I did others had posted the same thing


Radiolab's "Corpse Demon" is great too.

https://radiolab.org/podcast/corpse-demon


Cannot recommend it enough. One of my favorite Radiolab episodes.


A recluse like Kaczynski with the resources of someone like Elon Musk would be quite interesting, at the scale of the world or at the very least American society.


I really enjoyed the article :) It made things very understandable for me, a person who struggled with some similar things in school.

Would really appreciate the inclusion of an RSS feed so I can continue to follow along!

