
I’m 15 years into my career and I write Haskell every day. I’m getting a massive productivity boost from using an LLM.




How do you find the quality of the Haskell code produced by the LLM? Also, how do you use the LLM when coding Haskell? Generating single functions, or more?

I'm in a similar situation. I write Haskell daily and have been working with Haskell for a bunch of years.

I use Claude Code, though. The setup is mostly stock, but I do have a hook that feeds the output of `ghciwatch` back into Claude directly after editing. I think this helps.

- I find the code quality to be so-so. It reaches for if-then-else more than I'd like, and the style is too yolo for my liking.
- I don't rely on it for making architectural decisions. We do discuss when I'm unsure, though.
- I do not use it for critical things such as data migrations. I find that the errors it makes are easy to miss, and not the kind I'd make myself.
- I let it build "leaves" that are not so sensitive more freely.
- If you define the tasks well with types, then it works fairly well.
- Claude is very prone to writing tests that test nothing. Last week it wrote a test that put 3 tuples with strings in a list and checked the length of the list and that none of the strings were empty. A slight overfit on untyped languages :)
- In my experience, the uplift from Opus vs Sonnet is much larger when doing Haskell than JS/Python.
- It matters a lot if the project is well structured.
- I think there is plenty of room to improve with better setup, even without models changing.
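To make the "tests that test nothing" point concrete, here is a reconstruction of roughly what that test looked like (the names and data are hypothetical, but the shape matches the description: it only exercises the list literal it just wrote, never any real code):

```haskell
-- Hypothetical reconstruction of a vacuous LLM-written test.
-- It asserts facts about a hard-coded list of tuples, so it can
-- never catch a regression in the actual application code.
import Test.Hspec

spec :: Spec
spec = describe "users" $
  it "has valid users" $ do
    let users =
          [ ("alice", "a@example.com", "admin")
          , ("bob",   "b@example.com", "user")
          , ("eve",   "e@example.com", "user")
          ]
    length users `shouldBe` 3
    all (\(name, email, role) ->
          not (null name) && not (null email) && not (null role)) users
      `shouldBe` True
```

Both assertions are true by construction of the literal, which is exactly why the test tests nothing.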


I'm stuck in my ways with vim/tmux/ghci etc, so I'm not using some AI IDE. I write stuff into ChatGPT and use the output, copying manually, or writing it myself with inspiration from what I get. I feed it a fair bit of context (like, say, a production module with a load of database queries, and the associated spec module) so that it copies the structure and patterns that I've established.

The quality of the Haskell code is about as good as I would have written myself, though I think it falls for primitive obsession more than I would. Still, I can add those abstractions myself after the fact.
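"Primitive obsession" here means passing raw `Int`/`String` where a domain type would do. A minimal sketch of the after-the-fact fix, with illustrative names:

```haskell
-- Illustrative only: wrapping primitives in newtypes so GHC catches
-- argument mix-ups instead of relying on review to spot them.
newtype UserId  = UserId Int  deriving (Eq, Show)
newtype OrderId = OrderId Int deriving (Eq, Show)

-- With two raw Ints these arguments could be swapped silently;
-- with newtypes, swapping them is a compile error.
lookupOrder :: UserId -> OrderId -> Maybe String
lookupOrder (UserId u) (OrderId o) =
  Just ("user " ++ show u ++ ", order " ++ show o)
```

Newtypes are free at runtime, so there is no cost to layering them on top of LLM-generated code later.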

Maybe one of the reasons I'm getting good results is because the LLM effectively has to argue with GHC, and GHC always wins here.

I've found that it's also a superpower for finding logic bugs that I've missed, and for writing SQL queries (which I was never that good at).


“GHC always wins” is a nice sentiment. Something similar happens when I have written QuickCheck tests and get the LLM to make the implementation conform. QuickCheck almost always wins that fight as well.
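A minimal sketch of that workflow: write the properties first, then let the LLM iterate on the implementation until QuickCheck stops finding counterexamples. The `normalize` function and its properties here are hypothetical stand-ins:

```haskell
-- Hypothetical example: properties written by the human act as the
-- specification the LLM's implementation must conform to.
import Data.Char (isSpace, toLower)
import Test.QuickCheck

-- The function the LLM is asked to implement.
normalize :: String -> String
normalize = map toLower . filter (not . isSpace)

-- Properties that pin down the intended behaviour.
prop_idempotent :: String -> Bool
prop_idempotent s = normalize (normalize s) == normalize s

prop_noSpaces :: String -> Bool
prop_noSpaces s = not (any isSpace (normalize s))

main :: IO ()
main = do
  quickCheck prop_idempotent
  quickCheck prop_noSpaces
```

If the implementation is wrong, QuickCheck hands back a shrunk counterexample, which is exactly the kind of concrete feedback an LLM can act on.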

Try Claude Code.

Why?

I use a similar style to you: Neovim with ghci inside, plus HLS and ghciwatch.

Claude Code is nice because it is just a separate CLI tool that doesn't force you to change editor etc. It can also research things for you, make plans that you can iterate on before letting it loose, etc.

Claude is also better than ChatGPT at writing Haskell in my experience.


Codex CLI with gpt-5-thinking on "high" reasoning is also worth trying as an alternative now.



