The more I write software, the more I think errors should be first-class citizens (camp #2 from the OP's post).
I've been using https://github.com/biw/enwrap (disclaimer: I wrote it) in TypeScript and have found that the overhead it adds is well worth the safety it brings to handling and returning errors to users.
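To make that concrete, here's a minimal sketch of the errors-as-values pattern in plain TypeScript. This shows the general idea, not enwrap's actual API; the `Result` type and `parsePort` function here are made up for illustration:

```ts
// A discriminated union for results: the error is part of the return
// type, so callers can't forget that it exists.
type Result<T, E> =
  | { ok: true; value: T }
  | { ok: false; error: E };

type ParseError =
  | { kind: 'not_a_number' }
  | { kind: 'out_of_range'; value: number };

function parsePort(input: string): Result<number, ParseError> {
  const n = Number(input);
  if (Number.isNaN(n)) return { ok: false, error: { kind: 'not_a_number' } };
  if (n < 1 || n > 65535) {
    return { ok: false, error: { kind: 'out_of_range', value: n } };
  }
  return { ok: true, value: n };
}

const result = parsePort('8080');
if (result.ok) {
  console.log(`listening on ${result.value}`); // narrowed to number
} else {
  switch (result.error.kind) { // narrowed to ParseError
    case 'not_a_number':
      console.error('port must be a number');
      break;
    case 'out_of_range':
      console.error(`port ${result.error.value} is out of range`);
      break;
  }
}
```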
That said, I see parallels between the debate over typed vs. untyped errors and the debate over static vs. dynamic typing in programming languages.
> I see parallels between the debate over typed vs. untyped errors and the debate over static vs. dynamic typing in programming languages.
Author of the post here. I also see this parallel in error handling discussions, but it seems much harder to sell error handling than static typing. Static typing was also far more contentious in the past than it is now, so maybe the same shift can happen for error handling mechanisms in the future.
Your project seems very interesting! TypeScript is sophisticated enough to model complex Result-like types that can narrow and widen error cases throughout the code. I'll take a closer look when I find the time.
The biggest problem I see is that, like static/dynamic typing, it's usually a boil-the-ocean problem. Most languages have historically been either statically or dynamically typed. Only recently have TypeScript and Python allowed migration from dynamic to static typing, introducing millions(?) of developers to static types in the process.
With errors, it's harder: most languages can throw from anywhere, so it's hard to feel like any function is "safe" in terms of error handling. That's one of the reasons why `enwrap` returns a generic error alongside any other result: to support incremental adoption.
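Here's a rough sketch of what I mean by that, with the caveat that this is the shape of the pattern rather than enwrap's real API (`UnknownError` and `wrapLegacy` are hypothetical names):

```ts
// A catch-all error case keeps un-migrated code honest: anything it
// throws still shows up in the return type as a value.
type UnknownError = { kind: 'unknown'; cause: unknown };

type Result<T, E> =
  | { ok: true; value: T }
  | { ok: false; error: E | UnknownError };

// Wrap a legacy throwing function so its exceptions become values.
function wrapLegacy<T>(fn: () => T): Result<T, never> {
  try {
    return { ok: true, value: fn() };
  } catch (cause) {
    return { ok: false, error: { kind: 'unknown', cause } };
  }
}

// Call sites can then be migrated one at a time:
const parsed = wrapLegacy(() => JSON.parse('{"port": 8080}'));
if (!parsed.ok) {
  console.error('parse failed:', parsed.error.cause);
}
```

The generic case means callers always have to handle "something unexpected happened," even before the whole codebase has typed errors.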
If you have a chance to check out `enwrap` and have feedback, email me! (link in bio)
My only issue with kitty and tmux is that I always have to copy over my terminfo files manually, or else I get an `'xterm-kitty': unknown terminal type` error.
I'd say it's less about covering all the edge cases and more about showing that text editors are insanely complex: it doesn't take long to find edge cases even in text editors with tens or hundreds of millions of users.
Excited to share Mobile Tethering, our latest feature release on Faraday.dev. It lets you run local LLMs on your Mac or Windows computer (Linux soon) and seamlessly use them to chat with AI on mobile. Since all the heavy workloads run directly on your computer (instead of on an expensive cloud server), it's 100% free to use, and your chat data is never stored or logged in the cloud.
I'm one of the founders of Faraday.dev, so I'd love to hear any ideas you have on what we should build next!
__
PS: For those who've never used Faraday – it's a zero-config desktop app for creating AI characters (custom chatbots) powered by locally running LLMs. Faraday can run on CPU with only 8GB of RAM via llama.cpp by @ggerganov, and the app will automatically use your GPU to speed things up. We also have a community-driven Character Hub, text-to-speech, lorebooks, and more.