Some time back, someone here on HN posted a comment about rewriting a Java app in... Java. Same basic story of large savings, without changing language at all.
Studying the actual effects of language choice is hard (the time, effort, and budget required are beyond most teams' constraints), so people don't do it.
"It is faster to make a four-inch mirror and then a six-inch mirror than to make a six-inch mirror."
My favorite is when they make another post half a year later saying they changed from Y to Z.
It seems that following the trend and using the latest and greatest tools doesn't necessarily mean the new tools are better, but that you get to rewrite your application from scratch, with much more knowledge about the problem domain than before.
Sometimes I wonder if the real subtext is "we have such high turnover that almost nobody was around when the first system was designed. We rewrote it in a new language, and now we're all much happier because we all understand it much better, having been involved in its design." Wait a year, repeat.
2 years later...
LinkedIn: Our iOS app is native! Simple and fast! We save money!
Very rarely do I hit anything near to an optimal solution the first time out of the gate.
1st iteration: 'does it' (barely) but kinda looks hacky
2nd iteration: better. Actually works.
3rd iteration: looks clean, concise, pro, something to be proud of!
3rd time is a charm in software.
It's unbelievable how much clearer and more concise the code is 3rd time around.
If you can ever actually get away with this on a project ... highly recommended.
Many C++ game engines also moved away from huge class hierarchies of game objects to property-based game objects that were collections of behavior handlers.
What is clear to me is that Rust does incredibly well without it, but wouldn't be hurt by having it. Most of the egregious sins of OOP are mitigated by having far better abstractions available in Rust: ML-style modules, ADTs with pattern matching, type classes, first-class functions, etc. Much like Scala, if you give people capable FP, OOP stops being abused and starts being used appropriately.
Rust traits and Scala traits (or abstract classes) are more or less equivalent, with this one exception. I was just pointed to a proposal that would allow for this in Rust, but it is not yet implemented. Essentially, you would be able to define type members and then "link" them to a member of the struct/enum being "impl"-ed. While ergonomically not quite the same as Scala, it is effectively the same as inheriting an abstract class with a constructor parameter.
(The gist was that avoiding mutation and keeping everything purely functional allowed them to roll back and forth in time with no problem, or try out multiple different futures. That's somewhat orthogonal to inheritance.)
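The rollback trick alluded to here can be sketched in a few lines: since values are immutable, "rolling back and forth in time" is just a matter of holding on to previous states (History, step, and undo are made-up names for illustration):

```haskell
-- Since all values are immutable, older states stay valid forever;
-- rollback is just keeping them around.
data History a = History { past :: [a], current :: a }

-- Advance to a new state, remembering the old one.
step :: (a -> a) -> History a -> History a
step f (History ps cur) = History (cur : ps) (f cur)

-- Roll back one step (a no-op if there is no past).
undo :: History a -> History a
undo h@(History [] _)   = h
undo (History (p:ps) _) = History ps p
```

Trying out "multiple different futures" is the same idea: apply different functions to the same immutable state and compare the results.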
The modern counterpart is probably scalability — I see a lot of parallels in all of the Google/Facebook-envy applied to “big data” problems which can fit on an iPad.
If people keep trying to force code organization around the business jargon, they will keep getting the same awful result. It does not matter if they are writing OOP, Abstract Data Types, FP, or direct bits manipulation with assembly.
Unfortunately, explicit interfaces were a very late idea in OOP implementations (at least in terms of implementation in a major language; I'm not sure if the idea was around earlier), and they weren't in v1 of any major language as a core approach. So we've got a bunch of languages where we either accept multiple inheritance being messy, or don't have MI at all to avoid the mess.
But if you had exclusively explicit access to interface (including inherited) methods, MI would be clean.
- Apple, MacApp
- Microsoft: MFC, and others
- Borland: I forget what they called their stuff
- Anyone remember Taligent? Apple, HP and IBM, all getting together in a money-burning party...
- NeXT / Apple revival: NextStep / OpenStep, etc.
- Any number of minor players, including folks with Honest to Goodness Smalltalk implementations (none of which have survived to this day, I believe)
- Java stuff that I have mercifully forgotten
... they were nearly all crazy, and nothing was portable. So much for the promise of OOP :-)
What we can do is compare similar types of projects in different languages. And the things we can say there are pretty significant. For instance, at my last job using Angular we experienced a particular bug in production a couple times. In my current job our frontend is written in Haskell. I don't make definitive statements that often, but in this case I can definitively say that there is a ZERO chance that bug will happen in our codebase. I can say that because the type system guarantees that that class of bug can't possibly happen.
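The comment doesn't say which bug it was, but a classic example of a whole class of bugs a type system like Haskell's rules out is the null/undefined access: a lookup that can fail returns a Maybe, and the compiler rejects any code path that forgets the failure case. A minimal sketch:

```haskell
import qualified Data.Map as Map

-- A lookup that can fail returns Maybe; forgetting the Nothing case
-- is a compile error, not a production crash.
userEmail :: String -> Map.Map String String -> String
userEmail name users =
  case Map.lookup name users of
    Nothing    -> "<no such user>"
    Just email -> email
```

In a dynamically typed frontend, the equivalent failure shows up at runtime as an `undefined` propagating into the UI.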
The fact of the matter is there never will be absolute proof of these questions. The naysayers will always have straws to grasp at.
Every time I've been involved in a rewrite, including ones using the same language before/after, the outcome has been good. The act of doing a full rewrite is where the benefit comes from, it's hard to separate that from a language switch.
So, anyone doing this should consider trying to get a sample of people who are more or less indifferent between the options, so they don't sabotage the test.
Right, yes. Perhaps this entire exercise is a complete waste of time. Perhaps we should be investing in skilled people who understand their problem domain and then just trust them to do the best they can, rather than trying to find silver bullets inside programming languages.
As we were going to do the rewrite anyway, we considered more alternatives than just Python. For programs that have to be reliable and maintainable, I prefer a language with a strong static type system. Haskell has that, and it has a few practical benefits over other tools that made it an excellent choice for our use case.
I think that is one of the big factors stopping that from actually happening with OO code.
A rewrite is a lot harder if you tap into what exists elsewhere in the codebase.
I bet you have a lot more insight into this than me though.
I was also, until recently, quite convinced OOP was the only way to go, but I'm seeing signs everywhere that a lot of the design problems I've met over the past few years are at least magnified by OOP.
The abstraction promised by OOP is a good thing; however, very few people are able to consistently make good, reusable, and maintainable abstractions. To the point where it becomes a weakness rather than a strength. I don't want a billion wrapper objects that obfuscate my code and make its surface area bigger than it has to be. Often I struggle to understand code more because of how it was separated than because of the complexity of what it actually does.
I liked Rich Hickey's talk "Simple Made Easy" and Brian Will's "Why OOP is Bad".
In the end, it's all up to the programmer.
(The actual argument for functional programming is its adoption by elite programmers like Standard Chartered's Strats team, Facebook's anti-spam group, and Jane Street as a whole.)
That said, isn't it also fair to say that with a better understanding of the requirements and problem, you may determine that a different language / paradigm is a better choice, in the same way you may decide a different class division is better?
Of course Haskell is more mature, has support for multithreading and STM, and compiles to native, so it's more performant. But PureScript integrates with JS libraries and seems "fast enough" in many cases. I think it's more interesting as a project too: the lack of a runtime and of laziness means the compiler code is orders of magnitude simpler than Haskell's, so I could see a larger community building around it if it catches on.
Given that they were on Python earlier, I wonder if PureScript would have been a better choice.
Aside from apps at work, I made some simple physics demos with it: http://chrisdone.com/toys/ Performance seems good.
It may end up really ugly though: how would you define operators while preserving JS semantics (e.g. no currying)?
Now I'm curious :)
Check out ClassyPrelude. It's an (opinionated) alternate Prelude that wraps many things up into much more "modern" interfaces. `head` has been replaced with `headMay` (which, as you can figure, returns a `Maybe a`). Most functions can now handle `Text` fairly seamlessly. For an application developer, it's fantastic.
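For anyone unfamiliar, `headMay` is essentially just the safe version of `head` — a one-liner sketch (not ClassyPrelude's actual source):

```haskell
-- Total version of head: instead of crashing on [], it returns Nothing.
headMay :: [a] -> Maybe a
headMay []    = Nothing
headMay (x:_) = Just x
```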
Tell me, have you ever used foldr in Purescript? It just doesn't lead to reusable logic there, so I have no idea why you would.
But in Haskell, foldr is used everywhere. Laziness means that logic built with it is actually reusable.
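A small illustration of that point: a function built from foldr that stays usable even on infinite lists, because foldr is lazy and (||) short-circuits (anyOf is a made-up name; the real Prelude function is `any`):

```haskell
-- Built from foldr; stops as soon as the predicate holds, so it
-- terminates even on infinite lists.
anyOf :: (a -> Bool) -> [a] -> Bool
anyOf p = foldr (\x acc -> p x || acc) False
```

Under strict evaluation, the same foldr-based definition would try to evaluate the whole list before returning, which is why the building blocks compose less freely there.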
I don't understand your point. Why is laziness a requirement to write small reusable functions? Are you thinking about currying?
OCaml is (relatively) similar to Haskell and is not lazy. Function currying does not require laziness.
It's also hard to explain, but if you're used to Haskell, working with an eager-by-default language such as OCaml or ML is mildly annoying. You can adapt, of course, but it does seem as if gluing stuff together is trickier with eager evaluation.
There are downsides to lazy-by-default, of course.
(Not a quick read, but not too huge, and it is a classic that is well worth reading sometime).
Elm, another haskell-like, _strict_ language, models entire applications around the strict left fold over events https://guide.elm-lang.org/architecture/
We actually do the same in Jobmachine. The application is driven by a strict left fold over incoming events and current state.
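That pattern is easy to sketch: the entire application state is a strict left fold over the event stream (the Event type and step function below are invented for illustration, not Jobmachine's actual code):

```haskell
import Data.List (foldl')

-- A hypothetical event type for a job queue.
data Event = Enqueue Int | Finish Int

-- The current state is fully determined by the events seen so far.
step :: [Int] -> Event -> [Int]
step queue (Enqueue j) = queue ++ [j]
step queue (Finish j)  = filter (/= j) queue

-- Replaying the event log from scratch rebuilds the state.
replay :: [Event] -> [Int]
replay = foldl' step []
```

The strictness of foldl' matters here: the state is needed at every step anyway, so forcing it avoids building up thunks over a long event stream.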
I hear many good things about Purescript’s effect system, but I haven’t studied it in detail. This is definitely one of the areas where there is room for improvement in Haskell.
Regarding the type class hierarchy and head being partial, those weren’t really an issue in practice.
You might be happy to hear there is a PureScript native compiler. I'm also averse to JS things, I use PS but don't use the node-based tools to build it.
> No messing around with virtualenvs and requirement files. Everybody builds with the same dependency versions. No more “works on my machine” and “did you install the latest requirements?”.
I wonder why the Python ecosystem, which is much more mature, doesn't provide a build tool as delightful as Stack (which is less than 2 years old).
There was (and still is, a little) resistance to the whole idea of Stackage from the community; people liked the idea of build plans magically being figured out on demand, and it's an interesting research problem (it can be hard to let go of an interesting problem when a solution side-steps it altogether). I believe many people eventually changed their minds after experiencing the simplicity of using Stack and not having build hell be a substantial part of their development cycle.
Python would likely have to go through the same process. Although with Stack and Yarn (and QuickLisp) providing frozen builds, the idea has some precedent, which makes it an easier idea to sell. I mean, Debian and a bunch of other OSes do it like this, but experience shows programmers don't pay attention to that.
But when writing applications with hundreds of dependencies, manually figuring out a mutually compatible dependency range for all packages just isn't an option. At least not if you want to spend time prototyping code, rather than think about dependency ranges.
hpack solves additional problems with the .cabal format (sane defaults as opposed to build failure), and I highly recommend it, for application development at least. I just discovered it a month ago and now I wouldn't be able to live without it.
Stack was created because not everyone is a domain expert. A lot of people don't want to be domain experts. They just want something that works without having to know all the details. It was only able (in the business sense) to be created because so many people look at Haskell skeptically anyway, and take any excuse to back away from it. The people behind the development of stack also run a major advocacy initiative trying to get people to use Haskell, so they found it to be an important thing to build.
You don't need to try to get people to use Python. It's already broadly accepted. When people run into trouble, they just say it's the price of using Python, and aren't willing to make the exchange of giving up power to get rid of a minor inconvenience. So there's no business incentive in the Python ecosystem for making the tradeoffs stack does in the Haskell ecosystem.
> Stack is a mysterious "solution" to a problem
There's nothing mysterious about stack. It's just a group of people who step up and say "I am responsible for package $x" and then work together to find stable sets of versions that are guaranteed to work together.
The whole process happens out in the open, for example here is an issue tracking a compatibility breaking change in a common HTML library: https://github.com/fpco/stackage/issues/2246
More like "leads you to typically exercise less control". You can override versions of packages in a stack snapshot.
Why use an entirely ad-hoc freeze file when you can start from a known-working snapshot (that some of them might already have installed on their machines!) and modify it from there? I find this the perfect option in this kind of situation, and so I object to saying that stack is just for non-experts.
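Concretely, that workflow looks something like this in a stack.yaml (the resolver name and package version here are made up for illustration):

```yaml
# Start from a known-working Stackage snapshot...
resolver: lts-8.5

# ...then override or add individual package versions as needed.
extra-deps:
- blaze-html-0.9.0.1
```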
Oh, you just need another layer of abstraction! Install rustup, and then (from memory, might be slightly wrong):
rustup install 1.15.1
rustup run 1.15.1 cargo build
Single binary to deploy, no dependencies except libc, life is great.
Also note that by "no runtime installed" I mean no runtime as in "no Python runtime", "no JVM" etc. not necessarily "no libc"
Specifically things like xorg libs, libmpeg, libsdl, and such. Not that Rust would have a problem interfacing with them, just that they would need to be present regardless of whether or not someone was just trying to run a distributed binary.
Agreed that you wouldn't need a VM like CPython or the JVM. However, Rust isn't unique in that department. Almost all languages that compile to binary executables have this advantage.
That's why stuff like that is, AFAIK, usually either distributed with the binary or absolutely required to be present on the system, regardless of the PL, if you want to (or can only) distribute a "naked" binary.
> Agreed that you wouldn't need a VM like CPython or the JVM. However, Rust isn't unique in that department. Almost all languages that compile to binary executables have this advantage.
Didn't mean to suggest this is unique to Rust, which is why I wrote
> because Rust is a compiled language.
I thought you were implying that compiled binaries do not have dependencies. Now I can see that is not the case.
I hope I'm not being too pedantic but Python's ecosystem is much larger than Haskell's, it isn't really more mature. Haskell and Python are very similar in age as languages go.
* Millions of person-hours being poured into a language...
* ...Over a long enough time period that the language can go through several develop-eval-improve cycles - that take real world use cases (And not one-liner bubble sort implementations) into account.
In this sense, it doesn't matter whether Haskell was invented in 1890 or 1990. #2 is required for maturity, but so is #1.
(I am not a huge Python fan.)
Diversity matters. A language that one person tinkered on for thirty years is far less likely to be useful than one that ten people tinkered on for three years. Or, in the case of Python vs Haskell, a hundred people who tinkered on it for twenty-five years.
Python's ecosystem is certainly much more complete, and stable in the sense that radically new concepts don't appear every day.
Haskell's ecosystem is more reliable in the sense that a feature you are using will probably not disappear in a year, and libraries have fewer conflicts.
Could you imagine if this wasn't the case? The hurdle to actually get people excited about a language such as Haskell especially moving from something like Python would potentially be huge. Kudos for already having that problem solved.
A course on functional+logic programming is often placed in the 2nd or 3rd year of a typical European 3-3½ year CS degree.
If you're having to "secretly introduce" tech, and "get away with it", that suggests there are unnecessary and unproductive constraints on your work; maybe even suggesting that you'd get in trouble for actually daring to make things better.
We had been planning on replacing Scheduler for a while now, and had already written down some mumblings about what the new design should look like. We were also already discussing whether we would switch away from python back then.
I think the exact opposite of what you are saying is true. We got the freedom to experiment with something new, and to actually make things better along the way.
Sure, and from what I read I mostly took it that way. My original point was just that maybe a bit of caution would be good in the choice of title. If I was just skimming through the titles on HN, or skimming the article, it could be easy to get the wrong impression of Channable.
There's a tradition of programmers laying claim to subversively Making Things Better in spite of the bean counters. Sometimes, it is even true, as far as it goes.
It's kind of funny that build reproducibility (which was a major issue before stack) is one of the strong points.
I wonder if, for your project, using cloudhaskell would have been more appropriate.
I have a feeling some of the problems you found could have been solved with that.
While this is nice, of course, I'm not sure that outcome is unique to Haskell/Stack. It seems like you could accomplish a similar level of reproducibility by building a Docker image or bundling dependencies in some other way.
It is not clear to me how Docker solves the issue of pinning dependencies; I would rather have a file that states the exact version of every package to install, than an opaque blessed container image that has some versions installed, and I do want to have the versions used under source control. Generating the image would not be reproducible (in the sense of having the same Python files inside it) without pinning versions somewhere anyway, right? Or am I missing something obvious?
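For comparison, the Python-side equivalent of "a file that states the exact version of every package" is a fully pinned requirements file, e.g. generated with `pip freeze` (the package names and versions below are illustrative):

```
# requirements.txt — exact pins, kept under source control
flask==0.12
redis==2.10.5
requests==2.12.4
```

A Docker image built from such a file is reproducible; without the pins, two builds of the same Dockerfile can produce different package versions.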
This isn't a huge issue, but still it's nice in declarative systems like Stack and NixOS not to have to worry about that kind of thing.
> * We use all of the five different string types. It is annoying, but it is not a major problem.
cs and the OverloadedStrings extension are all you need, in my experience.
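For anyone unfamiliar: with OverloadedStrings, string literals are polymorphic, so the same literal can be a Text (or ByteString) without explicit conversions, and `cs` from the string-conversions package covers converting between the string types. A minimal sketch of the extension:

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Data.Text (Text)
import qualified Data.Text as T

-- The literal's type is inferred from context; no T.pack needed.
greeting :: Text
greeting = "hello"
```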
Also, a compiler refuses to compile your code if you make a typo in a field name.
There is a RealWorld that runs on top of IO and a FakeWorld that runs on top of pure State for unit testing.
This means that we have to wrap every single API into our own "SupportsRedis" and similar APIs, but in the end I think it's worth it! Unit tests are super fast and not intermittent at all.
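That pattern can be sketched with an mtl-style type class: application code is written against the class, and the tests pick a pure instance (SupportsKV, rename, etc. are made-up names mirroring the "SupportsRedis" idea, not the actual code):

```haskell
{-# LANGUAGE FlexibleInstances #-}
import Control.Monad.State
import qualified Data.Map as Map

-- Hypothetical effect class in the spirit of "SupportsRedis".
class Monad m => SupportsKV m where
  getKey :: String -> m (Maybe String)
  setKey :: String -> String -> m ()

-- FakeWorld for unit tests: backed by a pure Map in State.
instance SupportsKV (State (Map.Map String String)) where
  getKey k   = gets (Map.lookup k)
  setKey k v = modify (Map.insert k v)

-- Application logic written against the class runs in either world.
rename :: SupportsKV m => String -> String -> m ()
rename old new = do
  mv <- getKey old
  case mv of
    Nothing -> pure ()
    Just v  -> setKey new v
```

The RealWorld instance would be `instance SupportsKV IO` (or a ReaderT over IO holding the connection), talking to the actual Redis; the tests never touch it.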
I have tried various haskell plugins for vim in the past, but they always tended to break so I gave up fixing my config and threw them all away.
Now it's just plain vim (with some non-Haskell-related plugins). Next to it I have a terminal that reruns tests when a file changes: `stack test --file-watch`. It's simple but it always works.
I'm not sure if the vim stuff got any better lately, I haven't checked. So if you have any suggestions, please tell :)
You can do beautiful interfaces with e.g. Java also.
But where is the meat where anything actually happens? I rarely see that in these posts.
Yes I could look up the source but I don't have time to read through it randomly.
This looks just so nice and stuff just magically works?:
:: (MonadBackingStore m, MonadLogger m, MonadIO m)
-> m WorkerState
And monads to boot! (Are monads Haskell's equivalent of Java factories? I kid, I kid :)
In reality the signature is a bit uglier, I simplified it for the post because the point was about effects. In particular we also pass in the configuration, Redis connection details, and a callback to manipulate the TBQueue.
Am I understanding correctly that this is because, while you can lift e.g. runFetchLoop to something of type m (), it's not possible to use forkIO on it, since forkIO requires an input of type IO ()? Isn't that just a consequence of the fact that Haskell has no way of knowing whether your side effects can be contained in the IO monad?
If the implicit configuration is updated, there's no way to communicate that across threads. The same is true with all the other things monadic layering can provide. How do you call a continuation that points to a different thread? That doesn't even make sense.
So.. Why lie in your type and pretend that those things all make sense? Why not make the type explicit about what makes sense and what doesn't? That way, when someone wants to do something that has no a priori way of making sense, they're required to define how to handle it, such that it makes sense in their specific use case. And that's what the post says they did.
All in all, it's things working as designed. Places where you need to stop and think are set up such that you need to stop and think to use them, instead of barging ahead unaware of the issues.
At the shallowest level, we can't pass `m ()` to forkIO unless m ~ IO, 'cause the types don't match.
But beyond that, there is the question of how that extra context would be passed through. For something like ReaderT this is straightforward. But consider StateT - `set` in one thread can't be visible in the other.
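The ReaderT case really is mechanical: read the environment, then re-supply it inside the new thread (forkReader and demo are made-up names for illustration):

```haskell
import Control.Concurrent (forkIO, ThreadId)
import Control.Concurrent.MVar
import Control.Monad.Reader

-- forkIO wants a plain `IO ()`, so an action in `ReaderT r IO ()`
-- cannot be forked directly; instead, unwrap it with the current
-- environment and fork the resulting IO action.
forkReader :: ReaderT r IO () -> ReaderT r IO ThreadId
forkReader action = do
  r <- ask
  liftIO (forkIO (runReaderT action r))

-- Tiny demonstration: the forked thread sees the same environment.
demo :: IO Int
demo = do
  mv <- newEmptyMVar
  _ <- runReaderT (forkReader (ask >>= liftIO . putMVar mv)) 42
  takeMVar mv
```

For StateT no such function can exist with the same meaning, which is exactly the point above: the two threads would need two independent copies of the state.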
The naïveté in this simple statement is so cute.
The list of concerns is also pretty naïve. The main problem you are going to encounter with this project is hiring. If you want to grow this project or if the main developers leave the company, I bet it will get rewritten in a different language in no time.
I also encourage you to find _any_ experience report that tells of difficulties finding the right candidates for a Haskell job.
The point of the code snippet was simply to highlight how nice the Haskell build tools are compared to Python's.
More on this: https://www.reddit.com/r/haskell/comments/1f48dc/what_does_t...
All of the answers seem insufficient. Basically you can't estimate Haskell run-time unless you are very familiar with the internal Haskell engine.
Writing extremely performant Haskell is a very specialized skill. Happily, Haskell is still extremely fast even without fine optimizations.
It depends on what you want to do: if you're writing a moon lander and don't know anything about GHC internals you may be overreaching yourself=) But for most things like web apps etc. knowing the basics is enough.
Why did you choose to write your own, regardless of the language?
I'm guessing he is a Python developer and likely he is no longer the lead.