Why F# evangelism isn't working (2015) (ericsink.com)
202 points by luu 4 months ago | 329 comments

(Author here)

Well this is a blast from the past.

Back when I wrote this, I kinda hoped F# would surprise me and gain more traction than I expected. But 8 years later, if anything, it seems like the dominance of C# in the .NET ecosystem has grown.

F# is still a great language, but the main fact hasn't changed: C# isn't bad enough for F# to thrive.

F# always struck me as one of the most terribly underrated languages. I'm a lover of MLs in general, but F# lands on one of the sweet spots in PL space with ample expressive power without being prone to floating off into abstraction orbit ("pragmatic functional" is the term I believe). It is basically feature complete to boot.

I agree on the underrated part.

My theory as an outsider: F# is strongly tied to the Windows world, the corporate world, where a conservative approach is always preferable, both in your tech stack and when you need to hire peons to code all day. The corporate world isn't leaving OOP anytime soon, because it's what 95% of engineers focus on: the silent majority who do not frequent HN or play with functional languages on their weekends. The corporate world runs on Java and C#.

If F# had been released in the open-source, flashy and bustling world of Linux and macOS developers, it would have had much greater success.

I know you can run F# on Linux, but just like running Swift there, it feels like an outsider I wouldn't want to bet my business on if I were a Linux-only shop (which I am), however nice it feels. Also, a decade ago, when it had a chance to take root, Microsoft was still the Embrace, Extend, Extinguish company. It's not good enough to risk it, just like I'm not gonna use SQL Server for anything.

I am admittedly biased, because although I started programming recreationally in the LAMP-stack world of mid-aughts fame, a huge portion of my professional career has been in C# and the .NET stack.

I think you are grossly overestimating the degree to which the programming language you choose to use to solve a business problem constitutes "betting your business on." How would your business fundamentally change if your first 10k lines of code was in F# as opposed to Go, or Java, or Python, or TypeScript? These are also all languages I've been paid to use, and have used in anger, and with the exception of Java were all learned on the job. This comment in general has big "M$ bad" vibes and if you take those pieces out I'm not sure what the actual criticism is (maybe there is none)?

Aside from the EEE quip, I didn't catch any "M$ bad" vibe in GP's post.

I think the situation is clear-cut: until recently, you couldn't really run .NET on anything other than Windows, so the only people using it were those already invested in the ecosystem.

Among the people invested in the Windows ecosystem, many (most?) are large "non-tech" companies who hire people who mostly see their jobs as a meal ticket. These people don't have the inclination (for lack of curiosity, or time, or whatever reason, it doesn't matter) to look into "interesting" things. They mostly handle whatever tickets they have and call it a day. Fiddling with a language that has a different paradigm wouldn't be seen as a good use of their time on the clock by corporate, or of their time off work by themselves, since they'd rather spend that time some other way.

Hence, F# never really got any traction.

Thanks for coming to my defense. You are right. I'm not a big fan of Microsoft, but I also don't hate them.

It's pretty simple, really. I am a Linux engineer, and it is not a great investment of time and money for me to get into .NET. I knew F# was cool, but is it cool enough to make me want to feel like a second-class citizen, running it on an OS and platform it was not intended to run on? It makes no business sense at all.

> is it cool enough to make me want to feel like a second-class citizen, running it on an OS and platform it was not intended to run on?

I'm not a software engineer myself, nor a Windows person, so I don't know the specifics, but FWIW, my client runs production .NET code of the C# variety on Linux, connected to pgsql. It's some kind of web service for checking people's tickets (think airport gates where you scan your ticket to enter), so not exactly a toy, even though it's nowhere near unicorn scale. It seems to work fine, in the sense that I've never heard anyone complain about anything related to this setup. No "works for me but is broken in prod" or "thingy is wonky on Linux but OK on Windows so we have to build an expensive workaround".

The devs run Visual Studio (not Code) on their Windows laptops. Code is then built and packaged by Azure Pipelines in a container and sent to AWS to be run on ECS.

I do get it. .NET now works pretty well on Linux.

But it never was a tier 1 platform during its growth. So most non-Windows devs put their focus on other platforms. There is nothing wrong with that.

I could learn .NET now, but I don't really have an interest in doing so at this point. Also, the devs you talk about are on Windows, using their tier 1 IDE (Visual Studio) that only runs on Windows, which is my point exactly.

That's a fair point. Tooling is an important aspect of a language, at least for me. I don't know what the VS Code on Linux experience is like for .net.

I tried to dip my toes into F# out of curiosity, and it worked by following some tutorial in VS Code. But it did seem somewhat bare-bones. Although I'll admit I'm spoiled by Rust and IntelliJ.

Working for an org that bet on a mix of Scala, Python, and TypeScript, I can tell you which languages are being bet on for the rewritten services, and which language is getting in the way of getting things done.

Am I guessing correctly that Scala is "getting in the way of getting things done"?

That's a Texas sized 10-4

What's so bad about Scala? I've only used it for hobby projects myself.

Using it in a context where you need to make money, it's a bad bet. Fine for academic ideas and such things, but really hard to build a business around. And the tooling, community, libs, and docs show how it just can't punch in the same weight class as other languages when at the end of the day you need to get shit done.

Care to elaborate? Are there any big frameworks in the mix that might have gone from oss to commercial?

We have both Akka and http4s in use, and are migrating to http4s for those services. We need to do more things more quickly with fewer hands. TS and Python are just easier and better tooled for the majority of our (CRUD) work.

scala is being rewritten in .. typescript?

dotnet compiles in general are slow AF on Macs, and F# really stood out as the slowest last time I gave it a kick.

F# looks wonderful, but unless you’re already in the MS ecosystem, dotnet just feels bad and out of place. And I guess if you are already in the MS ecosystem you’re using C#.

> This comment in general has big "M$ bad" vibes and if you take those pieces out I'm not sure what the actual criticism is (maybe there is none)?

As with almost all "vibes"-related comments, this doesn't hold up. There isn't any criticism; just a positing that the sort of corporate, process-heavy companies that will major on Microsoft programming languages will be the last ones to want to try functional programming languages.

Would agree with this. I don't think the language choice is as massive a bet on the business as people think. I've seen much more niche and ancient langs without an ecosystem (no libraries, no SDKs for popular products, etc.) build very profitable products. I would see these languages as a much greater risk.

As long as it has a base capability (libraries, maturity), and people who join can be productive with it in a month or so, the risk is pretty low. For F#, most .NET developers, and even Node developers IMO, will get used to it relatively quickly. From my anecdotal experience with a number of languages, it's probably one of the easiest of the FP langs to onboard into, balancing the FP methodology with trying to be practical/pragmatic. It has a large ecosystem via the .NET platform and supplements it with FP-specific F# libraries where pragmatic to do so.

When it's time to scale out your team and now you're trying to hire dozens of F# developers it starts to matter a lot more. You can throw a rock and hit a Java developer. I hate the language, but finding other people who can be productive in it is trivial compared to F#.

One of the common threads among companies I've worked at which I would consider "successful" is that they don't really classify developers based on what languages they've used before. If you're a good programmer you can become a net positive in almost any language in a few weeks to a few months, and productive within the first year. Some of the worst companies I've worked for were the type who would toss a resume in the trash because they had 1 year of experience in $LANG from a few years ago and not the "have used it for 3 of the last 3 years" they wanted.

I think it depends on what you mean by "successful". Surely multi-billion dollar financial organizations are by at least some definition successful. They are a complete shit show from a tech standpoint. They are so large they cannot effectively manage specialist developer staff outside of very narrow niches. Standardization when you've got thousands of developers across hundreds of products matters. Maybe some "successful" startup can make things work when they are small. But you'll find they start to standardize when they hit real scale.

.NET has great Linux support nowadays though. I only use Linux and use .NET extensively and have no complaints

Totally agree; F# really feels like a language designed by someone who really does understand the theory and why it's important, but who also wanted to make the language realistic to use in industry.

When I was at Jet and Walmart, I never really felt "limited" by F#. The language was extremely pleasant to work with, and, I think most importantly, it was opinionated. Yeah, you can write Java/C#-style OOP in F# if you really want, but it's not really encouraged by the language; the language encourages a much more Haskell/OCaml-style approach to writing software.

Even calling C# libraries wasn't too bad. MS honestly did a good job with the built-in .NET libraries, and most of them work without many (or any) issues with the native F# types. Even third-party libraries would generally "just work" without headache. .NET has some great tools for thread-safe work, and I'm particularly partial to the Concurrent collections (e.g. ConcurrentDictionary and ConcurrentBag).
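To illustrate how seamless that interop is, here's a minimal sketch (my own example, not from any particular codebase) of using a BCL concurrent collection directly from F#:

```fsharp
open System.Collections.Concurrent

// A thread-safe word counter, using the BCL type directly from F#.
let counts = ConcurrentDictionary<string, int>()

let record (word: string) =
    // AddOrUpdate is atomic: insert 1 for a new key, or increment the existing count.
    counts.AddOrUpdate(word, 1, fun _ n -> n + 1) |> ignore

[ "f#"; "c#"; "f#" ] |> List.iter record
printfn "%d" counts.["f#"]   // 2
```

The F# lambda converts to the `Func<_,_,_>` overload without ceremony, which is typical of how little friction there is.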

I also think that F# has some of the best syntax for dealing with streams (particularly with the open source AsyncSeq package); by abusing the monadic workflow ("do notation" style) syntax, you can write code that really punches above its weight in terms of things it can handle.
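For instance, a stream sketch assuming the open-source FSharp.Control.AsyncSeq package (the details here are illustrative, not from any real service):

```fsharp
open FSharp.Control  // asyncSeq comes from the FSharp.Control.AsyncSeq package

// An asynchronous stream: each element is produced by an async computation,
// and consumers pull elements one at a time instead of awaiting a whole list.
let numbers =
    asyncSeq {
        for i in 1 .. 3 do
            do! Async.Sleep 10   // pretend each item takes some I/O to produce
            yield i * i
    }

numbers
|> AsyncSeq.iter (printfn "%d")  // items are printed as they arrive
|> Async.RunSynchronously
```

The `do!`/`yield` workflow syntax is what lets this read like straight-line code while actually interleaving async work.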

Now, on the JVM side you have something like Scala. Scala is fine, and there are plenty of things to love about it, but one thing I do not love about it is that it's not opinionated. This leads to a lot of "basically just Java" code in Scala, and people don't really utilize the cool features it has to offer (of which there are many!). When I've had to work with Scala, I'm always that weirdo using all the cool functional programming stuff, while everyone else on my team just writes Java without semicolons.

But the article does make a reasonable point: part of the reason that Scala has gotten more traction is that Java is just such a frustrating language to work with. Scala isn't perfect, but "better than Java" is a pretty low bar to clear.

C# is honestly not too bad of a language; probably my favorite of the "OOP-first" languages out there. The generics make sense, the .NET library (as stated before) is very good, lambdas work as expected instead of being some bizarre spoof'd interface, there are some decent threading utils built into the language, and it's reasonably fast. Do I like F# more? Yeah, I think that the OCaml/Haskell style of programming is honestly just a better model, but I can totally sympathize with a .NET shop not wanting to bite the bullet on it.

Martin Odersky is just a very nice guy, and I get the impression that he isn't keen on saying "no", which is how you end up with a language that allows you to use XML tags inline (no longer supported in Scala 3).


The "opinionated" Scala are the Typelevel and Zio stacks, which are very cool.

The problem with the "better Java" approach is that although it has helped Scala's growth a lot, it has also made it susceptible to Kotlin. The Scala code that doesn't use the advanced type magic can be straightforwardly rewritten in Kotlin instead. Kotlin also stops your bored developers from building neat type abstractions that no one else understands.

People who use Scala only as a "better Java" can now use Kotlin as a "better 'better Java'".

Yeah, and I think that's why a language like Clojure, which is substantially more opinionated than Scala, has been relatively unfazed by Kotlin. Clojure is much more niche than Scala, and its adoption has been much more of the "slow and steady" kind.

People who are writing Clojure likely aren't looking at Kotlin as an "alternative"; while they superficially occupy a similar space, I don't think Clojure has any ambitions of being a "better Java", but rather a "pretty decent lisp that runs on the JVM with some cool native data structures and good concurrency tools". I do like it better than Java, but that's because I like FP and Lisp a lot; if I needed a "better Java" right now, I would unsurprisingly probably reach for Kotlin.

Yep, Scala got a lot of attention because you could kinda write it like Java, and Java hadn't changed much in a very long time - people were looking for a "better Java" - and Clojure obviously isn't that.

Kotlin's whole point is being a "better Java", so it's going to grab the people who went to Scala for a "better Java". Also, Java now actually has a sane roadmap and methodology for getting better - with the preview/incubating JEPs, people can see what is coming down the pipeline.

Yep, I don't dispute anything you said there, I think that's pretty consistent with what I said.

Clojure makes no claims of being "Java++". It's a lisp first and foremost, one that focuses on embracing the host platform, staying broadly compatible with existing libraries, and providing strong concurrency protections.

> without being prone to floating off into abstraction orbit

What do you mean by this?

"Oh, you _also_ need to print something? Lets stack a few monad transformers..."

"But remember that you need the TemplateExplicative and NullUnderstanding compiler extensions!"

You can use eventlog traces, from Debug.Trace [1]. You can put (traceEvent $ "look: " ++ show bazinga) everywhere you need and then stare at the log to your heart's content.

[1] https://hackage.haskell.org/package/base-

No need for extensions, just compile and run your program slightly differently. That's the power of declarative languages.

Not everything is tracing and debugging, sometimes you really need to output intermediate results for "normal", "production" purposes. One could still abuse Debug::Trace, but that would really be ugly.

I also object to that "everywhere". It is far easier to just dump an extra 'print' line somewhere inside a for-loop than into a `foldl (*) 1 $ map (+ 3) [17, 11, 19, 23]`. And that is an easy one...

With eventlog you have a lightweight profiling and logging tool for "normal", "production" purposes. You can correlate different metrics of your program with your messages. This is not an abuse of Debug.Trace (notice the dot); it is the normal state of affairs, regularly used, and the RTS is optimized for that use case.

I develop with Haskell professionally. That foldl example of yours is pretty rare and usually dealt with via QuickCheck [1], mother of all other quickchecks. Usually, the trace will be outside of the foldl application, but you can have it there in the foldl argument, of course.

[1] https://hackage.haskell.org/package/QuickCheck

Eventlog is just `unsafePerformIO`, so you could just use that instead and not "hide" it, and feel better about that.

The "correct" answer to `foldl` would be `scanl` and printing the result of it.


Eventlog traces are RTS calls wrapped in unsafePerformIO, you are right. The trace part of eventlog is optimized for, well, tracing, and is very, very lightweight. It is also safe from races, whereas a simple unsafePerformIO (putStrLn $ "did you mean that? " ++ show (a,b,c)) is not.

In my opinion, eventlog traces make much better logging than almost anything I've seen.

Right now, developing with C++, I miss the power of Haskell's RTS.

The point I was trying to make was that if all you want/need is a `putStr`, just use `unsafePerformIO`.

Haskell's (GHC's) eventlog is nice, but it's a binary format and not comparable to text output.

> I develop with Haskell professionally. That foldl example of yours is pretty rare and usually dealt with via QuickCheck [1], mother of all other quickchecks. Usually, the trace will be outside of the foldl application, but you can have it there in the foldl argument, of course.

So actually not everywhere. And QuickCheck does something else entirely.

You missed the word "usually". You really, really do not need a print within the body of a loop of any tightness. But you can have it.

That foldl example of yours should be split into checking a property and controlling for expected properties of the input. The first part is done via QuickCheck, and the second part is usually done with assertions and/or traces.

But nothing precludes you from having your trace there, inside the foldl argument. It is clearly the wrong place to have it, but you can still have it there.

So I repeat, you can have your traceEvents everywhere.

I can't tell if you are trying to defend those languages or just piling up absurdities on the previous post in the style of "yes, and ..." improv.

I am trying to offer a counterpoint to what seems to me an unjust critique from a person who, at first sight, does not know much about Haskell.

Also, a link to a useful library is not a bad thing for anyone curious about Haskell. Thus, there's a bit of education there.

If it looks like improv, I am here every evening till Friday.

You're thinking of Haskell. F# was modelled after OCaml, which doesn't attract monad transformer stacks, and doesn't have a zoo of compiler extensions.

Well, they aren't actually compiler extensions but preprocessor extensions (PPX).

And I would really like it if OCaml had the option to name the needed PPXs in the source file (like Haskell's compiler extensions), so as not to have to read the Dune (or whatever build system is used) file to find out where `foo%bar` or `[@@foo]` is coming from and what it is doing. But at least the use of `ppxlib` nowadays should make PPXs "compose", i.e. not stamp on each other's feet.

https://ocaml.org/docs/metaprogramming http://ocamlverse.net/content/metaprogramming.html

I haven’t used it for some time but OCaml certainly used to have a zoo of incompatible compiler extensions. Circa 2008 or so I once hit on the brilliant idea of using protobufs to get two mutually incompatible halves of an ocaml program to talk to one another only to find that required yet another compiler extension to work.

Are you thinking of preprocessors? Back then, it would have been via Camlp4 and Camlp5.

Aah yes I am

I'm pretty sure F# was modeled on both. There are some definite "Haskell-isms" in F#; if nothing else, monads are typically done in something more or less equivalent to the `do` notation (an `async` or `seq` block), for example.

The syntax superficially looks a lot like OCaml, but it doesn't do the cool stuff with OCaml functors and modules; you write it a lot more like Haskell most of the time.
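As a tiny illustration of that flavor, using only core F# (my own sketch): `async { }` and `seq { }` are computation expressions, F#'s analogue of Haskell's do notation.

```fsharp
// let! is monadic bind, like <- in Haskell's do notation.
let lengthOf (fetch: string -> Async<string>) url =
    async {
        let! body = fetch url
        return String.length body
    }

// seq { } builds a lazy sequence; yield pushes an element into it.
let evens =
    seq {
        for n in 1 .. 10 do
            if n % 2 = 0 then yield n
    }

printfn "%A" (List.ofSeq evens)  // [2; 4; 6; 8; 10]
```

There's no OCaml-style functor or first-class module machinery in sight; the day-to-day idioms lean on these builders instead.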

Here is the "official" history of F#: https://fsharp.org/history/hopl-draft-1.pdf

Don Syme began with a port of Haskell to .NET, but SPJ convinced him that this was a bad idea, so he chose OCaml. ("The Decision to Create F#", page 9)

You have to add extra syntax to do very normal things like have more than one expression in a function.

As someone who's coded OCaml for 20 years, I have no idea what you're referring to. `let x in y`? `x; y`? `x, y`? Those are all in the base language.

>NullUnderstanding compiler extension

brilliant! lol

Is brilliant the best word? :P

"now just sprinkle some `map . sequence . map`'s here and there and you are done. who said this was difficult?"

C# in its current state has IMHO acquired many features from F#. You can get close to writing purely functional code in C# now.

I think that all the changes and extensions over time, adding ever more features to C# so you can write in whatever paradigm you want, are a bad idea.

I would much have preferred to keep C# OO and use F# when you want to go functional.

Or just create G# which is the everything mashed together language that C# is close to being.

> You can get close to writing purely functional code in C# now.

It seems that FP advocates overlook just how great LINQ is, even if it exists within the impure swamp of regular C#.

I don't really know LINQ (and haven't used F# for 2 years, back on .NET 5), but isn't F#'s `query` the interface to LINQ?

https://learn.microsoft.com/en-us/dotnet/fsharp/language-ref... https://fsharp.github.io/fsharp-core-docs/reference/fsharp-l...

Yeah, that's the F# support for it. The interesting bit is that C# supports nearly the same query syntax (`from item in something where item.IsInteresting select new { name = item.Name }`) and secretly supports just about any arbitrary Monad you want to write [1]. C# was also one of the first languages to take something like F#'s async { } builder for async/await syntax, as a different Monad transformer (one that C# also built to be duck-typable for any Monad that fits the right pattern).
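For comparison, the F# side of that same query might look like this (a sketch using only the `query { }` builder built into FSharp.Core; the `Item` type is made up for illustration):

```fsharp
// query { } is FSharp.Core's computation-expression front end to LINQ.
type Item = { Name: string; IsInteresting: bool }

let items =
    [ { Name = "a"; IsInteresting = true }
      { Name = "b"; IsInteresting = false } ]

let names =
    query {
        for item in items do
        where item.IsInteresting
        select item.Name
    }

printfn "%A" (List.ofSeq names)  // ["a"]
```

Under the hood both languages desugar their query syntax into the same chains of LINQ method calls.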

LINQ and async/await alone can be heady "gateway drugs" in C# to learning functional programming fundamentals (including the deep, advanced fundamentals like Monads), and C# only seems to continue to pick up little bits of FP here and there over time.

There are definitely lots of reasons why even some of the biggest FP fans often think "C# is good enough" without needing to reach for F# in 2023. (Which is why the post here, and the comments above in this same thread lament that F#'s biggest problem is the shadow that C# casts.)

[1] Mark Seemann did an entire series of blog post on coding common Monads in both F# and C# and the C# code is very interesting at times: https://blog.ploeh.dk/archive/

I don't dispute your claims, as they are subjective. I do know that I enjoy a lot of the functional aspects of C#, but I think it's something where you need to have a real discussion with your team and decide on a coding style and feature set for your code. If your team can't all speak the same dialect, you're going to have issues. Having seniors able to work with juniors and discuss the functional aspects, as well as when to use things like LINQ, goes a long way towards a consistent and easily understood codebase. I know not every shop has this luxury, which is why I still agree with your statement.

You bring up good points.

I often think about C.

I learned C a long time ago, I still have the KR book, and if I look at C code I can get a good idea of what is happening, even though I haven't kept up with all the changes.

In C# now, two developers can write code that accomplishes the same thing but might not be able to read each other's code. I don't think that is healthy.

It is somewhat the Microsoft Office / Word approach to programming languages: just keep adding features on top of features.

> I would much have preferred to keep C# OO and use F# when you want to go functional.

What's stopping you? You can mix and match C# and F# on the .Net platform, no?

Yes, it's pretty straightforward to have a .NET solution with a mix of C# and F# projects that can refer to each other.

F# also tried to pivot into data science lately, only to have Microsoft themselves jump into Python and become the entity that finally managed to convince Guido and others to invest in improving CPython's performance, and possibly JIT integration.

Basically, the pivot efforts were sabotaged by the same company.

Microsoft has always been a polyglot company. They also invested heavily in R and I believe even contributed some things to Julia.

I don't think the pivot entirely failed, there's definitely a small niche for "data science, but it needs to run in .NET" and F# still to my understanding fills it well. It's a very small niche and I don't expect to hear a lot of data scientists directly training for it, but there's a lot of advantages in places that use the Azure stack, for instance, for faster/better/more integrated data science when done with F#.

F# would probably need a lot more investment in dynamic types to truly attract a lot of data scientist attention. (Though the .NET DLR still exists and could use some fresh, modern love.)

Relatedly, I appreciate a lot that Microsoft's polyglot approach helped standardize the ONNX runtime, and even if the data scientists I'm working with prefer Python or R, I can still take ONNX models they build and run them in a C# or F# library with very little sweat.

I think if Microsoft would have continued to invest in projects such as IronRuby and IronPython, we'd be much further along in integrating different paradigms in a way that feels more natural, while also continuing to grow the DLR (for both features and performance).

I am only scratching the very surface of data science, but coming from .NET 1.0 and just starting to learn Python, I'm still finding it far easier to use Python for these tasks. It's most likely just the library ecosystem, and I'm hoping that Microsoft continues to add officially supported libraries to .NET for these tasks. ML.NET feels very foreign to me compared to using other libraries in Python, even as a beginner in Python (although I have experience with various languages, but only minimal experience in functional languages, mostly F#).

I don't think it matters how good or bad C# is; Object Oriented Programming is a mess.

Learning how to use an object system (a tree of objects/classes) is inherently hard.

The current problem with F# is that it doesn't do enough to shield you from objects. It does what it can, but to use F# effectively you still need to learn some C# and a lot of API surface that is basically Objects inside Objects inside Objects calling Objects calling Objects and more Objects.

OOP is bad because eventually OO systems become too complex; OO APIs are intimidating.

Separating Data from Behavior manages complexity better

If the only flaw in C# were knowing which calls require the new keyword (because they're constructors) and which don't (because they're factories), that would be bad enough to want to avoid it.

> OOP is bad because eventually OO systems become too complex; OO APIs are intimidating

This strikes me as a sort of ... reverse of survivorship bias.

You look around and see all complex systems are in OO, then you conclude that it is OO that is the cause of the complexity.

Have you considered that the non-OO designs are deficient in some way that prevents them from being used for the type of systems that you find to be examples of OO being bad?

Not that I am defending OO, I just want to know how you are differentiating between "OO produces complex systems" and "OO is used for complex systems".

> Have you considered that the non-OO designs are deficient in some way that prevents them from being used for the type of systems that you find to be examples of OO being bad?

Having shipped both significant non-OO projects and significant OO projects, their drawbacks were usually related to low adoption. In terms of code and architectural complexity, they were either comparable to OO projects (in specific situations), or better.

That being said, in most situations, language/paradigm choice were not the main drivers of project success. At worst, a bad OO codebase is a drag, not a killer, and the same is true with non-OO projects.

> Not that I am defending OO, I just want to know how you are differentiating between "OO produces complex systems" and "OO is used for complex systems".

OO definitely produces complex systems. And, let me be clear, by OO I mean the social consensus in OO circles, not the paradigm itself or the technical tools. My take is that OO circles host a cottage industry of consultancies and gurus peddling a stream of design patterns, advice, etc., which ends up layering in any long-lived OO codebase and creating unnecessary complexity.

> OO definitely produces complex systems. And, let me be clear, by OO I mean the social consensus in OO circles, not the paradigm itself or the technical tools. My take is that OO circles host a cottage industry of consultancies and gurus peddling a stream of design patterns, advice, etc., which ends up layering in any long-lived OO codebase and creating unnecessary complexity.

This right here. Every time I hear mid level dev bring up DDD I contemplate quitting and spending some time looking for a Rust or Clojure gig. Sometimes it gets so bad I think about biting the bullet and going to node.js

C# isn't a bad language, even the frameworks are taking a nice turn towards simplification (eg. ASP.NET Minimal APIs, EF direct SQL queries) but the culture it creates... LAYERS of bullshit :D

Absolutely! It is the misunderstanding and use of heavy abstraction, with "a class per file" that blows these systems into liabilities rather than solutions. Start with a low number of abstractions, as few as you can get away with given your requirements, and then only expand when the requirements change. It really doesn't matter the paradigm, it's possible to heavily abstract a functional system with various transformative functions that aren't truly needed until the data becomes more complex.

There is a whole industry peddling OO systems that are extremely abstracted for the benefit of filling chapters in a book, or producing extra pages of content on a website. I fell victim to both early on, in my learning and even my professional work, but somehow managed to follow what "felt right" and broke away from that to find an easy path forward that allowed me to use the tools I was given in the easiest way possible, and only introduce complexity when the solution required it (not complexity for complexity's sake).

I do feel like OOP introduces a lot of inherent overhead, not necessarily "complexity". I feel like doing anything in Java, for example, typically requires the creation of several separate files, spanning 30+ lines each, much of which is just class decorators and the like. I do feel like the equivalent program in something like Clojure will often be much shorter, and be contained in substantially fewer files without features missing. So much of the stuff that people love about classes, interfaces, and polymorphism can be replicated pretty easily with basic first-class maps and multimethods.

Obviously it's not a direct apples-to-apples comparison; Clojure is an untyped language, and performance for it is admittedly generally a little more difficult to predict. But, and obviously this is a sample size of one, I do feel like my programs have less... "fluff" than the equivalent in OOP languages.

But if you convert that Java to Kotlin it'll get vastly shorter, whilst still being semantically the same. OOP doesn't have to mean verbosity. Java chose that path to keep the language simple, like how Go is also very verbose but simple (Kotlin is more concise but more complex).

One of the main issues with this is that OO as practiced in C# and Java is only a very thin extract from the real OO as provided by for instance Smalltalk. And without that kind of environment you end up with the worst of both worlds, where you have an OO like interface layered on top of things that aren't really objects to begin with, because they aren't 'alive'.

Very good observation but I do wonder whether it’s the worst of both worlds or the best of both worlds - more the “eat the meat and spit out the bones” approach?

I feel this kind of argument is a bit pedantic though; when people complain about OOP, they're generally complaining about the mainstream implementations of OOP.

I don't think that people are really considering Smalltalk's OOP style when they complain about Java OOP.

This is far from pedantic. Calling something OOP when it isn't is a huge part of the problem.

One might say that Real OOP has never been tried.

Erlang and Elixir with spawned processes (objects with their own CPU) and send / receive message passing? But they do their best to hide it in OTP behind all that handle_* boilerplate.

There's Smalltalk. And Ruby gets pretty close to Smalltalk, but yeah, in practice it ends up looking like very close to Java. Your point stands.

Chicken and egg, perhaps? OOP lives and breathes state, so complexity (defined as an excess of state consideration) seems a natural pairing, yet the overhead and complexity is increased by each in response to the necessity of the other? That is, when the complexity of the problem shifts, there is a parallel increase in the complexity of the OOP solution.

The anxiety around the movement towards functional or procedural programming in general might also be a feature of age: a young programmer is eager to impress by juggling 8 balls effortlessly, but called upon to do the same 15 years later might admit 3 balls sufficed to begin with, and is closer to an attainable, sustainable solution.

The worst part of OOP is that all the properties of an object can be a mishmash of values and are mutable. In any method, you never know if the object is in some undesirable state without checking properties within the method itself. Multiply that headache across all methods and all other classes and it becomes a mutable mess. It makes it weird that we pass around objects as types when they encapsulate so much state and logic. They aren't really concrete data types; they are an entire living village.

Functional languages try to enforce explicit type signatures in the function arguments so things are cleaner within the functions themselves.

This isn't a property of OOP. This is a property of poor class design. You absolutely should be designing classes such that every possible sequencing of their public methods leaves them in a valid state and maintains their invariants.

Structs have the issue you describe and they aren't really OOP.

Yes, if the first thing you do when you write a class is make a setter method for each field then you will have problems. That's not really a property of OOP.

Poor class design IS a property of OOP.

All of these logical errors that are easy to commit are terrible because they are usually runtime bugs, not compile time.

As I think of it, I think a neat feature of OOP would be conditional methods that are only callable under specific circumstances. For example, the “Customer.SendPasswordResetEmail()” method couldn’t be called (or didn’t even exist) until I verify that the “Customer.IsEmailVerified” property is true.

Being able to add these types of annotations to methods for expected object state would help catch some logic bugs at compile time.
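You can approximate this today in F# by encoding the state in the type system, so the "conditional method" simply doesn't exist for unverified customers. A hedged sketch (all names here, like Customer and VerifiedCustomer, are illustrative, not a real API):

```fsharp
// Encode "email verified" as a separate type. The single-case union
// has a private constructor, so the only way to obtain a
// VerifiedCustomer is through the check below.
type Customer = { Name: string; Email: string; IsEmailVerified: bool }

type VerifiedCustomer = private Verified of Customer

module VerifiedCustomer =
    // The sole entry point: forces callers to handle the unverified case.
    let ofCustomer (c: Customer) : VerifiedCustomer option =
        if c.IsEmailVerified then Some (Verified c) else None

    // Passing an unverified Customer here is a compile-time type
    // error, not a runtime bug.
    let sendPasswordResetEmail (Verified c) =
        printfn "Sending reset email to %s" c.Email
```

Callers have to go through `VerifiedCustomer.ofCustomer` and deal with the `None` branch, so the "forgot to check IsEmailVerified" class of logic bug is caught by the compiler.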

> The worst part of OOP is that all the properties of an object can be a mishmash of values and are mutable.

Const-ness is one of the things I really miss from C++. I could look at an object and be reasonably sure I wasn't mutating it by calling foo.length() for example.

IMHO that is of such little help and the drawbacks weigh much heavier: const-correctness spreads like a cancer (try making one thing const without having to fix a hundred other things), and often requires annoying boilerplate -- I'm no C++ expert but if these are still the best available solutions...: https://stackoverflow.com/a/123995/1073695

Sometimes I want to add one little extra thing that gets mutated in an otherwise "const" method, e.g. for debugging purposes. If the compiler doesn't let me do that because I valued the ideal of const-correctness higher than practical concerns, I know I've done something wrong.

Perhaps it depends on the code-base. I worked on a medium complexity C++ project (~300kLOC) but which used multi-threading quite heavily with shared data structures, and there was only a couple of instances where I felt it got in the way.

In the vast majority of cases it reduced my cognitive load significantly because I could just look at the method declaration and see that my code would be fine.

Yes, as always it all depends on the context and how features are used. Maybe I was just bitten too often in situations where const is particularly gnarly to use. I know for sure that in many cases, such as when calling small helper functions for copying a shallow array and such, one can easily pass pointers as pointers-to-const.

However, IME there is a big problem with const for more database-y, more stationary in-memory data. This is the kind of data that is almost always going to be mutated by at least some part of the code at some point in time. There is a fundamental problem of communication between mutating code and non-mutating code (the "strstr()" function, that has to apply a const-cast hack internally to implement its interface, is a trivial example here).

As said there are certainly situations where such "communication" isn't needed, but I'm anxious about precluding the possibility in the name of const-correctness.

I feel that instead of const (or whatever static, formalized description of what a function is doing), good naming is most helpful for intuiting broadly what a function does.

In C at least, I've ended up leaving const almost exclusively for the cases where the data is truly const - i.e. in the .ro section of the binary, and I know for sure it won't ever have to be modified, and basically I have to apply the const qualifier lest it puts the data in the wrong section / it needs an awful cast to remove the const. The majority of those are string literals typed as "const char *".

Lisp derived languages, with dynamic gradual typing, and dynamic scopes, say hello.

Erlang and Elixir store state in arguments of recursively called functions, usually running in their own processes separate from the rest of the application. There is nothing in the language to enforce correctness of the state. They are generally regarded as functional languages even if they are somewhat object oriented if one thinks about their message passing as method calls to the object / process storing the state.
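F# has a fairly direct analogue of that Erlang pattern in `MailboxProcessor`: state lives in the argument of a recursive loop inside a message-processing agent, never in a mutable field. A minimal sketch (the counter protocol is invented for illustration):

```fsharp
type Msg =
    | Increment
    | Get of AsyncReplyChannel<int>

let counter =
    MailboxProcessor.Start(fun inbox ->
        // 'state' is never mutated; each recursion passes the next value.
        let rec loop state = async {
            let! msg = inbox.Receive()
            match msg with
            | Increment -> return! loop (state + 1)
            | Get reply ->
                reply.Reply state
                return! loop state
        }
        loop 0)

counter.Post Increment
counter.Post Increment
printfn "%d" (counter.PostAndReply Get)  // prints 2
```

As in Erlang, nothing in the language enforces the correctness of `state` beyond its type; the guarantee is that only this one loop can touch it.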

>If the only flaw in C# is knowing which method calls requires the new keyword because its a constructor, and which dont because its a factory, that is bad enough to want to avoid it

I'm sure this is just an example popping first out of your mind, but it seems like an oddly specific thing to mention. Especially since the answer is obvious if you know C#: the name of the method matches that of the type if and only if it is a constructor.

I won't comment on the rest of your post as my experience with F# is minimal; but I think I understand where you're coming from.

> Separating Data from Behavior manages complexity better

There's a sweet spot, and it varies. Sometimes it is difficult to find. API design can be difficult. Managing complexity is sometimes itself a complex process.

Every system will become too complex if allowed to. No single paradigm of programming is perfect for all cases.

OO is one way to structure and model a system.

No matter what language you use, you will end up with some form of struct: a set of values that belong together. Then you will have lists of some structs and trees of some structs.

You will almost certainly have to create lists/collections/groupings of structs, because those are quite useful and universal.

How you act on those collections is different between different idioms.

In other words you will create a model of data one way or another and you have to maintain it / change it, as required over time.

The data structures themselves are rather often based on one or more database schemas where the data will be extracted and saved.

Just like every language is able to be slow/non-performant -- OO here being analogous to Python in a web context -- it doesn't invalidate that a good number of OO codebases in the wild devolve into incomprehensible black boxes, where no one has any idea what anything does or how to make meaningful changes that fulfill the original intent (compare that to imperative programming, where you can at least read it).

A list: I give you a vector. Plain and simple. Not this insanity: https://referencesource.microsoft.com/#mscorlib/system/colle... [0] You do not need OO to create a vector (or even an array -- god forbid!)

As for trees: roll your own. They're simple enough, yet tightly-coupled with context that no generic implementation exists that is flexible enough. You do not need OO to create a tree. C has been working with trees long before the current Frankensteination of OO was even a twinkle in Gosling's eye.[1]

Data structures do not need inheritance -- they might need delegation (message passing that requires you to actually think about your system).

Data structures do not need encapsulation -- they most likely need namespaces. Realistically, most classes will be used as namespaces.

Data structures do not need polymorphism -- just implement the members you need, and name them appropriately (no 5+ word phrases, please. Please!)

What modern OO does is lower the barrier to productivity in the present, and then pays for it in the future. It's no different than writing your "planet scale" backend system in JS.

[0] Compare to: https://gcc.gnu.org/onlinedocs/gcc-4.6.3/libstdc++/api/a0111...

[1] If you want to know why we have Java: some guys that didn't have the time to think about low-level things (memory management specifically) for their embedded applications got sick of trying to learn C++ and decided to make their own language. That's it. There was no grand plan or thoughtful design -- it's just a mishmash of personal preference. The same people described C++ as "being too complex" (fair) and using "too much memory" (lol)

What do you find insane about the C# `List` source code?

I'm not a C# programmer, but the public API looks sound, and the entire thing is like 1K LOC including docstrings (I guess the inherited code would add to that).

I don't think it's even using inheritance - List implements a few interfaces though?

List is an IList/IReadOnlyList; these interfaces do nothing that couldn't be done right inside the file itself.


Instead we have to go diving through the IList, which implements ICollection, which implements IEnumerable, which implements IEnumerable (again). Just because each interface is composed of another interface, doesn't mean you aren't using inheritance. You are effectively creating a custom inheritance tree through willy-nilly composition.

It is gratuitous to make this chain so deep, when the underlying code is just a handful of lines.


The doc-strings are unnecessary. It's self-evident what most of the code does if you read it.

        // Returns an enumerator for this list with the given
        // permission for removal of elements. If modifications made to the list 
        // while an enumeration is in progress, the MoveNext and 
        // GetObject methods of the enumerator will throw an exception.
        public Enumerator GetEnumerator() {
            return new Enumerator(this);
        }

        // Returns the index of the last occurrence of a given value in a range of
        // this list. The list is searched backwards, starting at the end 
        // and ending at the first element in the list. The elements of the list 
        // are compared to the given value using the Object.Equals method.
        // This method uses the Array.LastIndexOf method to perform the
        // search.
        public int LastIndexOf(T item) {
            Contract.Ensures(Contract.Result<int>() >= -1);
            Contract.Ensures(Contract.Result<int>() < Count);
            if (_size == 0) {  // Special case for empty list
                return -1;
            }
            else {
                return LastIndexOf(item, _size - 1, _size);
            }
        }

        // Returns the index of the first occurrence of a given value in a range of
        // this list. The list is searched forwards, starting at index
        // index and upto count number of elements. The
        // elements of the list are compared to the given value using the
        // Object.Equals method.
        // This method uses the Array.IndexOf method to perform the
        // search.
        public int IndexOf(T item, int index, int count) {
            if (index > _size)
                ThrowHelper.ThrowArgumentOutOfRangeException(ExceptionArgument.index, ExceptionResource.ArgumentOutOfRange_Index);
            if (count < 0 || index > _size - count)
                ThrowHelper.ThrowArgumentOutOfRangeException(ExceptionArgument.count, ExceptionResource.ArgumentOutOfRange_Count);
            Contract.Ensures(Contract.Result<int>() >= -1);
            Contract.Ensures(Contract.Result<int>() < Count);
            return Array.IndexOf(_items, item, index, count);
        }
If you remove these 300 lines of pointless comments, you still have 900 lines of code that is terribly space-inefficient. Everything is "pretty," but slow to read, because of the immense amount of whitespace, nesting, and lines longer than 76 chars. You cannot read long swathes of code in one screenful. You have to scroll vertically and horizontally, because for some reason a standard library needs to throw exceptions (exceptions aren't free; they negatively and noticeably impact performance).

Seriously, you could just use an "out" errno/status. "But then we would have to always check to see if the operation succeeded!": exceptions make people lazy. Just because an exception wasn't thrown, doesn't mean you're doing things correctly.


Why does a List implement a search algorithm? Why binary search of all things -- because it's convenient? You know if I need a binary search, I can write one myself. Don't pollute my namespace.

        // Searches a section of the list for a given element using a binary search
        // algorithm. Elements of the list are compared to the search value using
        // the given IComparer interface. If comparer is null, elements of
        // the list are compared to the search value using the IComparable
        // interface, which in that case must be implemented by all elements of the
        // list and the given search value. This method assumes that the given
        // section of the list is already sorted; if this is not the case, the
        // result will be incorrect.
        // The method returns the index of the given value in the list. If the
        // list does not contain the given value, the method returns a negative
        // integer. The bitwise complement operator (~) can be applied to a
        // negative result to produce the index of the first element (if any) that
        // is larger than the given search value. This is also the index at which
        // the search value should be inserted into the list in order for the list
        // to remain sorted.
        // The method uses the Array.BinarySearch method to perform the
        // search.
        public int BinarySearch(int index, int count, T item, IComparer<T> comparer) {
            if (index < 0)
                ThrowHelper.ThrowArgumentOutOfRangeException(ExceptionArgument.index, ExceptionResource.ArgumentOutOfRange_NeedNonNegNum);
            if (count < 0)
                ThrowHelper.ThrowArgumentOutOfRangeException(ExceptionArgument.count, ExceptionResource.ArgumentOutOfRange_NeedNonNegNum);
            if (_size - index < count)
                ThrowHelper.ThrowArgumentException(ExceptionResource.Argument_InvalidOffLen);
            Contract.Ensures(Contract.Result<int>() <= index + count);
            return Array.BinarySearch<T>(_items, index, count, item, comparer);
        }
What if my list -- as is almost always the case -- is unsorted? The result will be incorrect? Looking through the chain of indirection, I cannot see any code checking to see that the list is sorted. Maybe it's there, but it's so much overhead trying to make sense of the List.BinarySearch -> Array.BinarySearch -> ArraySortHelper<T>.Default.BinarySearch -> arraysorthelper.BinarySearch -> arraysorthelper.InternalBinarySearch chain. So I'm going to silently get a wrong result, and the only way to know is to read the docstrings? Thanks.


As far as I can tell, it's unoptimized. It's just plain, OO C# meant to be readable. I don't see any tricks or tweaks to get the IL to be more concise/performant. Maybe the compiler is aggressively optimized for the core lib (but I'm not holding my breath -- because I can't see it).

I stopped using C# & F# almost a decade ago, but there are some relevant pieces of information that answer your questions:

1. Optimization is primarily handled by the .Net JIT, not the C# compiler. That allows F#, C#, VB.Net and other runtime languages to share similar performance characteristics without duplicating effort.

2. Docstrings are used by the IDE to help the user. That avoids the need to read the source code itself for regular usage.

3. When comparing the .Net List<> implementation against any C++ std::vector implementation, the former looks quite tame in comparison...

Numbering your citations from zero, eh? I like the cut of your jib.

I empathize with your characterization of "a tree of object/classes" and I yearn for an example of how else to model a complex, domain-specific system not using the aforementioned tree.

Not the author of the comment, but based on how I understand the comment, I feel essentially the same way.

I would characterize it a bit differently, seeing as, for example (and to your point), a purely functional lisp program is a tree of lambdas and macros. The same could be said of Haskell.

For me the issue is that classes and objects are actually pretty complicated things for what they are. It’s easy to not notice when you’re in the habit of using them, but really pause and think about how complicated they are. They have both structure and machinery that probably aren’t required for most abstractions: regardless, in OOP they get shoehorned into every problem.

This is why OOP ends up with a bunch of well known design patterns, whereas in FP they’re not reaaaaally a thing (arguably).

A tree of functions is probably the simplest possible way to build programs, at a fundamental level: I am not speaking in terms of individual preferences here, but really mathematical simplicity.

well you see! What we can do is to namespace our functions, e.g. by naming them component_create, component_add_button, etc. We then create a plain dictionary with key value pairs that gets passed onto these functions! The functions then possibly return a new map, which is a modified map! This allows us to write code like

  dog = dog_create({name: "foo", age: 12})
  dog = dog_add_friend(dog, dog2)
and we can avoid OO completely.

oh... wait a minute

This comment shows a total misunderstanding of what functional programming is…

While tongue in cheek, this is one of the OO-in-non-OO patterns that is used heavily in large FP projects alike.

I'm not seeing that in the example, and I'm not even seeing anything very relevant to FP in the example either. I guess there isn't much mutation happening, and functions are called? But that's not what FP is.

This tells me that you never really looked at functional languages, not even used them. The power of ADT, especially when using a comprehensive pattern matching expression, is pretty difficult to emulate in the OOP world without a ton of code. But in this extremely simple case you just need a record.

    let dogBar = {Name = "bar"; Age = 11; Friends = []}
    let dogFoo = {Name = "foo"; Age = 12; Friends = [dogBar]}
    printfn "%A" dogFoo.Friends
The advantage is that it's immutable and guaranteed not to have null in any fields. C# only introduced records recently, while F# was born with them. And C# still hasn't got ADTs because it's missing union types, as far as I remember.

It's not a tree though. A tree doesn't have connected leaves and branches. This is, however, common with classes that might get the same dependency injected.

Sounds like missing the tree for the forest. I'm not from a pure CS background (so forgive my mangling of terms), but isn't a tree essentially an acyclic graph with constraints (1 parent, 2 children, for example)? What you're describing is adding some cycles into that graph, no?

The number of children can be anything, it's two children for a binary tree. Each node except one node must only have one parent, which isn't true if two or more nodes share one or more children.

And, yes, in theory this adds cycles which aren't allowed. However, since class dependencies are better represented as directed connections (which aren't usually used for trees in CS terms), it isn't a true cycle.

relational model, like we always did and do every day (in the db realm)

I am not saying we should never use trees; I am mainly saying that when the model is a very deep tree (or several deep trees, trees everywhere), it becomes overly complex

data models should be as flat as possible , and only nested when absolutely necessary

Yes, and my yearning was for examples in which the domain objects are complex systems or machines themselves.

To your point, if the domain is a payment system, I can keep separate db's of Customer Info, Customer Purchases, Transaction Instances, Customer payment methods, etc. This seems like a domain suitable for functional code.

If the domain is a two stage orbital rocket, in which we must have a stateful system that has internal feedback loops (fuel consumption, vehicle trim, time of flight, time before stage separation, engine sensor data), our best software design is an object graph which causes spaghetti code ( does the navigation system belong to the electrical system, or the radio system? Wait, does the radio system belong to the electrical system? Wait, does the entire electrical system belong to the solid fuel system, since the electrical system is dependent on the generators partially, but what about the battery system? What critical components stay on the battery system if the generators are shut down?). I guess my point is, real life is a spaghetti relationship.

Consider the recent ISpace probe crash. The article says "software bug" but in reality it's more of a 'design flaw' and I would bet it's exactly because of the topic of this thread. The sensors were reading correct data, but the design/validation of the intercommunication data between sensors was designed wrong.


Documents are pretty much everywhere. In many cases they are mutable because user needs to edit them, and on the web JavaScript code needs to dynamically modify them.

According to debugging tools in my web browser, your <div class=comment> is at level #15 under the <body> element. I wonder how would you model the in-memory representation of this web page, while keeping the model practical?

The big difference between C# and F# styles (yes you can do either style in both languages, but with varying degrees of friction) is if that tree is mutable or immutable.
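For concreteness, the idiomatic F# version of "changing" a node is a copy-and-update that leaves the original tree intact; this is a sketch with an invented Node type:

```fsharp
// An immutable tree node: "updating" it means building a new record
// that shares the unchanged branches with the old one.
type Node = { Value: int; Children: Node list }

let root = { Value = 1; Children = [ { Value = 2; Children = [] } ] }

// Copy-and-update: 'root' itself is untouched.
let root' = { root with Value = 42 }

printfn "%d %d" root.Value root'.Value  // prints 1 42
```

In C# you would typically reach for a mutable class and assign to a property in place; both styles are possible in both languages, but the default friction points in opposite directions.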

It is a mess because most people don't learn how to code properly.

They would make a mess in Modula-2, or Standard ML as well, given how many need a network layer to write modular code.

F# (and OCaml, on which it was modelled) are OOP languages. If you don't like OOP there are functional programming languages that might be a better fit for you.

No, Functor and Monad are OOP concepts.

> F# is still a great language, but the main fact hasn't changed: C# isn't bad enough for F# to thrive.

That's right. I mostly switched to writing "dumb records + service classes" code in C#, and while F# is terser, there's just not enough pain to cause me to switch. When DU's come to C#, the gap will get even narrower.

I used to write lots of C#, but now I consider it a bad ecosystem. The problem is the amount of ceremony, silly OOP abstractions, dependency injection, etc. Just look at building a simple HTTP endpoint in C# compared to Node or even F#!

> Just look at building a simple HTTP endpoint in C# compared to Node or even F#!

It's... basically the same? Include library, create server object, tell server object what request to handle and what to return, start server object.


Yeah, you can get away with minimal C# stuff if you want. Mind you, minimal APIs are a relatively new thing as of .NET 6, so GP might not have had the chance to touch these new things yet.

Much of the older overdesigned pain of C# is that it used to be tied to how IIS wanted things to work. .NET Core initially pivoted more to DI stuff before pivoting again to these minimal APIs, but the DI stuff is definitely still available (and used).

For pragmatic choices it's all there, you can start off with minimal API's and get far with it and once you start feeling pain-points as you want to re-implement things it might be time to add in the more "frameworky" parts.

You can also use DI without having to use interfaces for everything. It's easy and possible to inject straight dependencies from concrete classes into constructors, without all the abstraction (beyond allowing constructors being called for your dependencies and injected into your controllers). I'm a big fan of using services with an unabstracted database context from EF Core, rather than ever making use of repositories. I can still go from a MVC project with concrete views, over to Web API with a completely JavaScript frontend framework, without needing to change any of my business logic. Approximately 10 years ago, I was lost in all the abstraction you'd find in tutorials and any book on ASP.NET MVC, but experience has taught me a LOT, as well as working with other languages and seeing the (lack) of ceremony needed to get things done.

By "used to write", I'm guessing maybe they worked in the .NET Framework/IIS era, which did have a cognitive cliff to climb. You could get used to the ceremony though and then your brain started to ignore it and focus on the stuff that mattered. These days it's much easier though.

I agree this was a problem, but with the current Minimal API's, there is no boilerplate, looks a lot like Express to me!

  var builder = WebApplication.CreateBuilder(args);
  var app = builder.Build();
  app.MapGet("/", () => "Hello World!");
  app.Run();

>> The problem is the amount of ceremony, silly OOP abstractions, dependency injection, etc.

Your code snippet certainly has a lot of unnecessary ceremony. Why use a builder object at all? Why use a static class with a function to build the builder object?

  var builder = createBuilder(args);
  // etc...
Would be better. But

  var app = createWebApp(args);
  // etc...
Is even better. No ceremony at all!

There is a lot that gets done behind the scenes in createBuilder(). I understand where you're coming from, but this allows you to override any defaults that you don't like, in order to provide your own. I personally still stick to the standard MVC pattern, and don't go crazy with abstractions. I place my business logic within services and inject those in my controllers, but if you were to run a debugger, you would not have to jump through interfaces and other useless abstractions that were a thing of the past (and present if you follow current tutorials and books). I have used Node.JS, and still use it to provide my frontend developers with an environment using Express to build out templates using Gulp for minification/transpilation/compression for use in Umbraco (a .NET Core CMS). My frontend developers don't need to know C#, and can work in standard EJS templates and HTML, but benefit from SCSS and modern JavaScript. I can then build out the Razor syntax for views, and just drop their CSS and JS files directly into the CMS projects.

Another "blast from the past" for me too..

Your "career calculus" article has been top of mind for me recently as I've talked about it a bunch of people. Amusing how those core concepts don't change much.

Also, you correctly anticipated that Swift would become mainstream long before F#, which happened. Of course hindsight is 20/20, but this wasn't that obvious back in 2015. Your reasoning was sound.

Swift is still very much a niche language. It's a big niche, but a niche nevertheless.

> F# is still a great language, but the main fact hasn't changed: C# isn't bad enough for F# to thrive.

C# will always be more popular because it is easier to learn. Why? Because it looks familiar to most developers. Why would you learn this unfamiliar thing called F# if C# is right there and you basically already know it? On top of that, C# almost has feature parity with F#.

However, F# is a simpler language than C#. That is a fact. It has fewer concepts that you need to learn. I've found that onboarding someone in an F# codebase takes a lot less time compared to onboarding someone in a TypeScript, C#, ... codebase. A lot less time. I've found that new people can start contributing after a single introduction. The things they build often just work.

I think that an F# code base costs a lot less money to maintain over longer periods of time. Can't prove it but I think that the difference is huge.

Would be interesting to see actual stats on F# usage, which I doubt are readily available. Given the reaction to this post, from what is an old article, there's still probably an underground interest in the language and some use in general. People seem to have built strong views on it either way, especially with some posters admitting they use it professionally in a closed-source culture (finance, insurance, etc). Most metrics would not be accurate given interoperability with C#; e.g. when Googling I would typically look up C# code and port it.

Maybe it doesn't need to thrive for everyone; maybe it just needs to continue being useful for the people who employ it and add value. That's probably OK. They could be just busy building stuff instead of blogging, especially if the community is mostly composed of senior developers (10+ years).

I think another problem was that Microsoft downplayed F#. They didn't support it in SSIS packages, or fully support it in MVC projects. Those were the main things I did back then. I really wanted to use F# and had the freedom to do so, but had to conclude I'd be faster sticking with C# than switching back and forth.

If you had to chose right now and abandon the other, would you pick LINQ or discriminated unions?

Not OP but I would choose discriminated unions.

Why? Because LINQ is basically just syntactic sugar for regular IEnumerable methods, while discriminated unions have no equivalent at all.

Even if you wanted to claim that those IEnumerable methods ARE linq, then it would still be possible to implement them with a library while discriminated unions have to be a compiler feature.
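To make the asymmetry concrete, here is what a discriminated union plus exhaustive matching looks like in F# (the Shape type is a stock illustrative example, not from the thread):

```fsharp
// A closed set of cases, declared in three lines.
type Shape =
    | Circle of radius: float
    | Rectangle of width: float * height: float

// The compiler warns if a case is left unhandled, which is the part
// no C# library can emulate: exhaustiveness is a compiler feature.
let area shape =
    match shape with
    | Circle r -> System.Math.PI * r * r
    | Rectangle (w, h) -> w * h

printfn "%f" (area (Rectangle (3.0, 4.0)))  // prints 12.000000
```

The usual C# workaround (an abstract base class with one subclass per case and a visitor or type-switch) gives you the data shape, but not the compile-time guarantee that every case is covered.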

Yep, Linq could be replaced by the more general Computation Expressions in F#

F# has a query { } expression for LINQ already.

I'm talking about improving C# by replacing LINQ with the more general Computation Expressions.


The C# compiler "duck types" LINQ so you can already (ab)use LINQ for general computation in C#. You can use nearly any Monad you want with LINQ syntax. It isn't always a strong fit for some types of Monads, but it is more capable than it seems. You might get some funny looks if you do, though.

(Similar with async/await: it is "duck typed" at compile time so you can write other Monads for that, if they make more sense in that form of transformer than LINQ. Or support both LINQ and async/await together.)

There's definitely some more interesting power in F#'s Computation Expressions that can't easily be done even with (ab)using the tools that already exist like that, but it is still interesting what can be done with the existing tools.

I don't think that LINQ has support for applicatives or custom operators. Although yes, it can be abused in impressive ways :)

I really wanted to learn it, but I wanted to learn F# & not C#. The problem is...you can't really learn F# without knowing .NET and how it does all the OO stuff. Even the most basic things that require one easily googleable line in Python would return no results for F#. You just have to figure it out in C# and then you can apply to F#.

Seeing your name pop up was a blast from the past for me too - I used to read back in the "The Business of Software" days.. circa 2005 I think?

This is a nicely written essay and, I think, completely wrong. (One person's experience here, just as a disclaimer.)

I've interviewed a lot of functional candidates in a decade-long stint of functional programming professionally. My interview approach is always the same. All practical exercises, no leetcode here. You can do the exercises in the language you're most comfortable in.

If I had to pick a language that predicted you'll do poorly on a practical interview exercise, I would pick F# every time. For some reason the candidates just do not do well. Now I can think of some confounders - maybe the types of people who would apply to a job with some dynamic programming requirements and people who are good at F# just have no overlap. I've thought about that.

But it seems like this line

    Pragmatists don't make technology decisions on the basis of what is better. They prefer the safety of the herd.
Implies that if someone just saw some code in F# and realized what people can do in it, they would be super impressed. I have not found that to be the case. If that's a general problem and not just a quirk of my personal experience, that has to be fixed first.

That's weird. I use F# because of its pragmatism. Every other language tacks on feature after feature. I would say my F# code is boring, which is what I love about F#. And it's especially boring compared to other functional languages. The other pragmatic functional languages are Erlang and Elixir.

I would consider most popular languages, like C#, Java, and Python, decidedly unpragmatic as languages. There's way too many hoops to jump through to concisely describe the problem domain. In F#, I define some types that describe the domain, write some functions, and move on. It's that easy.

I think F# programmers lack that gamut because they get comfortable in the eager execution type safe world and stay there with no particular reason to learn dynamic programming techniques. There is also the effect that it allows less advanced functional programmers to be productive so that in randomly sampling currently active functional programmers the F# programmer is less likely to be advanced.

Scala developers were referred to as Java refugees, Swift developers as Objective-C refugees, and F# developers as C# refugees. A weird side effect of Microsoft doing a better job with C# is that there's less of a push to F#. Plus F#, by virtue of being in DevDiv, had its core value proposition (OCaml on .NET) undermined by the WinDev vs. DevDiv internal battles that tried and failed to kill .NET.

I have been programming for 20 years, and yet despite having used dynamic languages I don’t actually know what it means to leverage dynamic programming techniques. For instance, I’ve never encountered a JavaScript codebase that I have thought couldn’t benefit from just being statically typed with Typescript. I get the impression that dynamic programming, besides the odd untyped line of code, is best used only for extremely specific cases?

The problem is the word dynamic is overloaded, and I'm not at all sure which one your parent comment meant.

"Dynamic programming" traditionally has nothing to do with dynamic languages but is instead a class of algorithms that are "dynamic" in the sense that they represent time.[0] This might be what your parent was referring to because these algorithms lend themselves well to Haskell's lazy evaluation, and they reference F# as being eager.

That said, they also talk about F# as being type safe, so they could also be referring to dynamic programming languages. The grandparent was definitely referring to this one, but "dynamic programming techniques" sounds much more like the algorithmic meaning.

[0] https://en.m.wikipedia.org/wiki/Dynamic_programming

To be clear, I wasn't referring to 'dynamic programming' but, as you say, the use of a dynamic language, or programming without types, mimicking what I assumed the original poster I replied to meant.

My guess is that interviewees wishing to return to the typed world they are comfortable with would first try to type the JSON they are working with. Given that the JSON is messy this could be an unbounded amount of work that is unlikely to pay-off within the span of an interview.

Ok that is very confusing because "dynamic programming" is a very specific thing, and also super popular in leetcode questions. Maybe half the questions on leetcode.com involve dynamic programming.

It has absolutely nothing to do with dynamically typed programming. It's also a really terrible name for what is essentially caching.
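For concreteness, the classic textbook example is Fibonacci, where the "dynamic programming" is just caching the answers to overlapping subproblems. A minimal TypeScript sketch:

```typescript
// Dynamic programming in the algorithmic sense: solve each overlapping
// subproblem once and cache (memoize) the result.
function fib(n: number, memo: Map<number, number> = new Map()): number {
  if (n <= 1) return n;
  const cached = memo.get(n);
  if (cached !== undefined) return cached;
  const result = fib(n - 1, memo) + fib(n - 2, memo);
  memo.set(n, result);
  return result;
}
```

Without the memo, the recursion is exponential; with it, each subproblem is computed once, so `fib(50)` returns instantly.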

I'm curious about what context would require an untyped language

By untyped I assume you mean dynamic languages? In some contexts it's not convenient to lug around a type checker, embedded languages for example. Other times, if doing macro-heavy programming (Lisp, Forth), it's hard to build a type system that can properly type check the code or resolve the implicit types in a reasonable amount of time.

In the context of JSON, you can work on it without types from a typed language. It's just that, as a force of habit, coders may choose to spend time adding types to things when they shouldn't.

> For instance, I’ve never encountered a JavaScript codebase that I have thought couldn’t benefit from just being statically typed with Typescript

That's the type bias. If you look at a non-typed codebase, it always feels like it will be better with types. But if you had a chance to go back in time and start the same codebase in Typescript, it would actually come out way worse than what you have today.

Types can be great when used sparingly, but with Typescript everyone seems to fall into a trap of constantly creating and then solving "type puzzles" instead of building what matters. If you're doing Typescript, your chances of becoming a product engineer are slim.

There is much naïveté among the strongly typed herd. When tasked to create glue code translating between two opposing type systems, which is a very common data engineering task, reaching for a strongly typed language is never the best option for code complexity and speed of development. Yet the hammer will often succeed if you hit hard enough and club the nail back to shape when you invariably bend it.

    There is much naïveté among the strongly typed herd.
Is exactly the reverse also true? Let me try: "There is much naïveté among the weakly typed herd." For every person who thinks Python or Ruby can be used for everything, there is another person who thinks the same for C++ or Rust.

Also, the example that you gave is incredibly specific:

    When tasked to create glue code translating between two opposing type systems, which is a very common data engineering task
Can you provide a concrete example? And what is "data engineering"? I never heard that term before this post.

I'm a data engineer, it's a fairly new role so it's not well defined yet, but most data engineers write data pipelines to ingest data into a data warehouse and then transform it for the business to use.

I'm not sure why using a static language would make translating data types difficult, but I add as many typehints as possible to my Python so I rarely do anything with dynamic types. I guess they're saying for small tasks where you're working with lots of types, when using a static language most of your code will be type definitions, so a dynamic language will let you focus on writing the transformation code.

Thank you for the reply. Your definition of data engineer makes sense. From my experience, I would not call it a new role. People were doing similar things 25 years ago when building the first generation of "data warehouses". (Remember that term from the late 1990s!?)

I am surprised that you are using Python for data transformation. Isn't it too slow for huge data sets? (If you are using C/C++ libraries like Pandas/NumPy, then ignore this question.) When I have huge amounts of data, I always want to use something like C/C++/Rust/C#/Java to do the heavy lifting because it is so much faster than Python.

Yes, it's definitely a new word for an old concept, same as the term data scientist for data analyst or statistician.

I find Python is fast enough for small to medium datasets. I've normally worked with data that needs to be loaded each morning or sometimes hourly, so whether the transformation takes 1 minute or 10 minutes it doesn't matter. The better way is of course to dump the data into a data warehouse as soon as possible and then use SQL for everything, so I only use Python for things that SQL isn't suited for, like making HTTP requests.

Using a static language to manipulate complex types, particularly those sourced from a different type system (say complex nested Avro, SQL, or even complex JSON) is much more awkward when the types cannot be normalized into the language automatically as can be done with dynamic languages. Static languages require more a priori knowledge of data types, and are very awkward at handling collections with diverse type membership. Data has many forms in reality -- dynamic languages are much more effective at manipulating data on its own terms.

You realize every single thing that dynamically-typed languages can do with data types, statically-typed languages can do too? Except when it matters, they can also choose to do things dynamically-typed languages can't.

Lots of people assume static typing means creating domain types for the semantics of every single thing, and then complain that those types contain far more information than they need. Well, stop doing that. Create types that actually contain the information you need. Or use the existing ones. If you're deserializing JSON data, it turns out that the deserialization library already has a type for arbitrary JSON. Just use it, if all you're doing is translating that data to another format. Saying "this data is JSON I didn't bother to understand the internal content of" is a perfectly fine level to work at.

Stop inventing work for yourself.
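A minimal TypeScript sketch of working at that level: a transformation that touches only one top-level field and passes everything else through untouched, with `Json` standing in for the library's arbitrary-JSON type:

```typescript
// "JSON I didn't bother to understand": a type covering any JSON value.
type Json = null | boolean | number | string | Json[] | { [key: string]: Json };

// Rename one top-level field, leaving the rest of the document as-is.
function renameField(doc: Json, from: string, to: string): Json {
  if (doc === null || typeof doc !== "object" || Array.isArray(doc)) return doc;
  if (!(from in doc)) return doc;
  const { [from]: value, ...rest } = doc;
  return { ...rest, [to]: value };
}
```

No domain modelling needed: the function is fully statically typed, yet it never commits to knowing what the payload contains beyond the one field it manipulates.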

Excellent blog post. I never saw it before. Thank you to share.

You can’t do monkeypatching or dynamically modify the inheritance chain of an object in a statically typed language.

But yes, you can have a JsonNode type, which is still better than having every type being “object”.

About monkeypatching, perhaps we have different definitions. From time to time, I need to modify a Java class from a dependency that I do not own/control. I copy the decompiled class into my project with the same package name. I make changes, then run. To me, this is monkeypatching for Java. Do you agree? If not, how is it different? I would like to learn. Honestly, I discovered that Java technique years ago by accident.

Another technique: While the JVM is running with a debugger attached, it is possible to inject a new version of a class. IDEs usually make this seamless. It also works when remote debugging. Do you consider this monkeypatching also?

Monkeypatching is programmatically adding a method to a class at runtime.
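In JavaScript (and in TypeScript, via its `any` escape hatch), that looks like the following sketch; the `Greeter`/`shout` names are purely illustrative:

```typescript
class Greeter {
  constructor(public name: string) {}
}

// Monkeypatch: attach a method to the class after it is defined.
// Every existing and future instance sees it immediately.
(Greeter.prototype as any).shout = function (this: Greeter): string {
  return this.name.toUpperCase() + "!";
};

const g = new Greeter("ada");
const shouted: string = (g as any).shout();
```

Note that the static type of `Greeter` never learns about `shout`; the casts are exactly the point where the type system is being bypassed.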

> You can’t do monkeypatching or dynamically modify the inheritance chain of an object in a statically typed language.

There's no theoretical reason you can't. No languages that I know of provide that combination of features, because monkey-patching is a terrible idea for software engineering... But there's no theoretical reason you couldn't make it happen.

I think you've conflated static typing with a static language. They're not the same thing and can be analyzed separately.

So how would a statically typed language support conditionally adding methods at runtime? Let's say the code adds a method with name and parameters specified by user input at runtime. How could this possibly be checked at compile time?

You could add methods that nothing could call, sure. It would be like replacing the value with an instance of an anonymous subclass with additional methods. Not useful, but fully possible. Ok, it would be slightly useful if those methods were available to other things patched in at the same time. So yeah, exactly like introducing an anonymous subclass.

But monkey-patching is also often used to alter behaviors of existing things, and that could be done without needing new types.

How would you emulate monkeypatching with anonymous subclasses?

You would need another feature in addition: the ability to change the runtime type tag of a value. Then monkey-patching would be changing the type of a value to a subclass that has overridden methods as you request. The subclasses could be named, but it wouldn't have much value. As you could repeatedly override methods on the same value, the names wouldn't be of much use, so you might as well make the subclass anonymous.

In another dimension, you could use that feature in combination with something rather like Ruby's metaclasses to change definitions globally in a statically-typed language.

I can't think of a language that works this way currently out there, but there's nothing impossible about the design. It's just that no one wants it.

So how would you compile code which contains a call to a method which is only defined at runtime?

In a dynamic language, everything is only defined at runtime.

Given that, a sketch of a statically-typed system would be something like... At the time a definition is added to the environment, you type check against known definitions. Future code can change implementations, as long as types remain compatible. (Probably invariantly, unless you want to include covariant/invariant annotations in your type system...)

This doesn't change that much about a correct program in a dynamic language, except that it may provide some additional ordering requirements in code execution - all the various method definitions must be loaded before code using them is loaded. That's a bit more strict than the current requirement that the methods must be loaded before code using them is run. But the difference would be pretty tractable to code around.

And in exchange, you'd get immediate feedback on typos. Or even more complex cases, like failing to generate some method you had expected to create dynamically.

Ok, I can actually see some appeal here, though it's got nothing to do with monkey-patching.

I love using "mixed" dynamic/static typed languages in these scenarios... you can do that data manipulation without types, but benefit from types everywhere else... my two favourite "mixed" languages are Groovy on the JVM, and Dart elsewhere... Dart now has a very good type system, but still supports `dynamic` which makes it as easy as Python to manipulate data.

A major problem with doing data transformation in statically typed languages is that it's easy to introduce issues during serialization and deserialization. If you have an object

    {
      "name": "ex obj",
      "value": "a sample value",
      "extraProperty": "I'm important"
    }
And the code

  class MyDTO {
    string name;
    string value;
  }

  var myObjs = DeserializeFromFile<MyDTO>(filepath);
  SerializeToFile(myObjs, filePath2);
filepath2 would end up without the extraProperty field.
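The same silent loss can be sketched in TypeScript, where projecting the parsed data onto the DTO shape drops the undeclared field:

```typescript
interface MyDTO {
  name: string;
  value: string;
}

const raw = JSON.parse(
  '{"name":"ex obj","value":"a sample value","extraProperty":"important"}'
);

// Projecting onto the DTO keeps only the declared fields;
// extraProperty is silently discarded on the round trip.
const dto: MyDTO = { name: raw.name, value: raw.value };
const serialized = JSON.stringify(dto);
```

Nothing warns you: the program type checks, runs, and quietly writes out less data than it read.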

You can also write code like

  function PrintFullname(person) {
    WriteLine(person.FirstName + " " + person.LastName)
  }
And it will just work so long as the object has those properties. In a statically typed language, you’d have to have a version for each object type or be sure to have a thoughtful common interface between them, which is hard.

All that being said, I generally prefer type-safe static languages, because that type system has saved my bacon on numerous occasions (it's great at telling me I just changed something I use elsewhere).

No, it's not.

You can write code in a statically typed language that treats the data as strings. The domain modelling is optional, just choose the level of detail that you need:

1. String

2. JsonNode

3. MyDTO

If you do choose 3, then you can avoid serde errors using property based testing

"Most" (I mean "all", but meh - I'm sure there's some obscure exception somewhere) parsers will have the ability to swap between a strict DTO interpretation of some data, and the raw underlying data which is generally going to be something like a map of maps that resolves to strings at the leaf nodes. Both have their uses. The same can also be done easily enough by hand as well, if necessary.

Even with a dynamic type, it becomes a real static type at some point when running, so why wait for the crash to happen at runtime?

I think whether your comment is essentially true or not depends on the language and technology.

If you are truly interested in understanding my point of view -- a great way to do it would be to learn how to use this Clojure DSL: https://github.com/redplanetlabs/specter You could also think about why Nathan Marz may have bothered to create it. As for data engineering, I think ChatGPT could tell you a lot, and its training data is from 2021.

As someone who tried very hard to incorporate specter into their speech-to-text pipeline, I feel compelled to point out, it gave me a lot of NullPointerExceptions while learning. I don't think it's a great example of the value of dynamically-typed langs.

In retrospect, Marz's hope that specter might get incorporated in clj core was wildly optimistic (even if the core team wasn't hostile to outsider contributions), because it feels like he built it to his own satisfaction, and never got around to removing the sharp edges that newcomers cut themselves on.

It's a shame, because I think specter is a cool idea, and would love to see a language based on its ideas.

I have found that types are a benefit when it comes to debugging complex data systems because it moves component failures closer to the root cause.

Relevant blog post is "Parse, Don't Validate"

They keep trying to kill .NET; just check how much WinDev keeps doubling down on COM and pushing subpar frameworks like C++/WinRT.

One would expect that by now, out-of-process COM would be supported across all OS extension points, instead it is still pretty much in-process COM, with the related .NET restrictions.

Then there is the whole issue that since Longhorn, most OS APIs are based on COM (or WinRT), not always with .NET bindings, and even VB 6 had better ways to use COM than .NET in its current state (.NET Core lost COM tooling).

Doesn't look to me like they're trying to kill .NET at all. Maybe F# in particular isn't getting the love and attention it deserves but they'd have to be mental to be actively trying to kill off something as popular as .NET

Kill in the sense that from WinDev point of view, the less .NET ships on Windows the better.

In case you aren't aware, WinRT basically marks the turning point started with Longhorn ideas being rewritten into COM.

With WinRT, they basically went back to the drawing board of Ext-VOS, a COM based runtime for all Microsoft languages, hence why .NET on WinRT/UWP isn't quite the same as classical .NET and is AOT compiled, with classes being mapped into WinRT types (which is basically COM, with IInspectable in addition to IUnknown and .NET metadata instead of COM type libraries).

Mostly driven by Steven Sinofsky and his point of view on managed code.

This didn't turn out as expected, but still the idea going forward is to make WinRT additions to classical COM usable in Win32 outside the UWP application identity model.

"Turning to the past to power Windows’ future: An in-depth look at WinRT"


Naturally since DevDiv also has something to say, .NET isn't going anywhere.

And since nowadays "Azure OS" matters more than Windows, WinDev point of view is mostly relevant for Windows Server and Hyper-V workloads.

UWP never required AOT, it was an opt-in performance boost (that has since been rebuilt to support all of .NET in ".NET Native").

Today's post-UWP C#/WinRT bindings don't require AOT either.

Beyond WinRT, .NET (Core) has supported all the raw COM, including COM component hosting since at least .NET Core 3.0. It's Windows-Only, of course, to use that, but that should go without saying. It's mostly backwards compatible with the old .NET Fx 1.0 ways of doing COM and a lot of the old code still "just works". .NET has proven that with .NET 5+ and all the many ways it (again) supports raw Win32 fun in the classic WinForms ways. (And all the ways that even .NET Fx 1.0 code has a compatibility path into .NET 5.)

It would have been nice if Windows had stronger embraced .NET, but WinRT is still closer to .NET in spirit than old raw COM anyway.

UWP always deployed via AOT on the store.

.NET Core doesn't do COM type libraries like the Framework does, you are supposed to manually write IDL files like in the old days.

Additionally, the CCW/RCW infrastructure is considered outdated and you should use the new, more boilerplate-heavy COM APIs introduced for COM and CsWinRT support.

Lots of changes, with more work, for little value.

This seems to be implying that F# programmers do 'poorly' because the language 'protects' them. But isn't that good? Why have a language that purposely tries to trip you (dynamic), where you are a 'good' programmer only if you don't get tripped? This seems to reward people who are good at using a bad language. If you use F#, you don't learn the techniques for coping with other bad languages; that doesn't make you a bad programmer, it just means F# is more seamless.

Indeed, I’m suggesting an alternative explanation for the given observations based around the absence of a strong selection criteria bias. I’m of the strong opinion that F# is a great language and that people of different levels of skills can be productive in it. As opposed to a C++/lisp combo where only the most careful programmers get to keep both of their feet.

F# is a slice through the language design space that optimizes for developer productivity, other languages with different design choices are optimized and are indeed better for other things.

I think there is a benefit to learning 'bad' languages, as they teach you about the different design trade-offs that are available. A person with dynamic-language experience would have known that the given JSON task was tractable without types and not have started the task with a 'type all the things' mentality.

I disagree on almost every count: it's a badly written essay, but it makes a valid point.

> I've interviewed a lot of functional candidates in a decade-long stint of functional programming professionally. My interview approach is always the same. All practical exercises, no leetcode here.

Does this mean that you're measuring, like, how fast someone can deploy a CRUD webapp in the given language? I can imagine F# would do poorly on that kind of metric; it's optimized for maintainability and doesn't take unprincipled shortcuts, whereas something like Ruby lets you type one line and it will blat out a bunch of defaults via unmaintainable magic.

> Implies that if someone just saw some code in F# and realized what people can do in it, they would be super impressed. I have not found that to be the case. If that's a general problem and not just a quirk of my personal experience, that has to be fixed first.

I don't think it's a general problem; almost everyone who spends any significant time working in F# returns to C# reluctantly if at all. It's already a "better" language in the sense of how nice it is to program in and how productive a programmer feels when doing so. But those aren't the metrics that matter.

Rails has the generators you're talking about, not Ruby, and I've seen and worked on some very maintainable Rails apps.

> Rails has generators you’re talking about not ruby

They may not be built into Ruby-the-language but they're very much part of Ruby-the-development-community.

> I’ve seen and worked on some very maintainable rails apps

I've seen far more upgrade breakages in apps built on rails than any other stack.

> I've seen far more upgrade breakages in apps built on rails than any other stack.

React Native would like to argue with you on that point

That's interesting, because I've tried to use F# several times, and never really felt comfortable. I've written in a bunch of different languages and F# is probably the most disappointing because of how much I want to like it.

I feel like F# deceptively presents itself as simple, when in reality it is closer to C# in complexity. I've written in languages that are actually simple (tcl) and it is a joy. I've written in languages that are unashamedly complex (Wolfram language) and that is also fun. But F# occupies that weird middle ground where it seems easy to do what you want, but for some reason you trip over your own shoelaces every time you take a step.

I wrote F# for a long time, and there were definite phases to learning and becoming comfortable with it. For example, it's often pitched as a functional language, but in reality it's a functional-first hybrid language on the .NET framework - to be efficient with it is to embrace this and write imperatively when you need to.

Pure syntax-wise, it's pretty nice, though having a single-pass compiler makes it feel a bit dated compared to other languages.

    though having a single pass compiler makes it a bit dated feeling compared to other languages
Can you please share how F# could improve with a multi-pass compiler?

You could have out-of-order function declarations.

Forcing linear dependence of files and definitions is considered a feature of F#. In codebases that allow out of order definitions, things get wild real quick.

I found it rather annoying that you cannot organise your code to be readable from top-to-bottom in a file, going from the big picture to finer details. Which, I think is much easier for humans.

That is a matter of habit and not something that is "easier for humans", I think. I've written enough OCaml, which also enforces linear dependence of modules and definitions, that I often find it jarring to read code the other way around and I'd be lost without an IDE that lets me jump to definitions.

I'm not particularly attached to order of definitions within a module, but I definitely like the linear ordering of modules. Without the compiler enforcing that you always end up with cross dependencies between modules, which I think makes it much harder for humans to read the code.

I've had similar conversations about variable shadowing. I find it natural and it bothers me when I can't do it, especially in functional languages, but I have a friend who really doesn't like it and finds it jarring. I think his dislike stems from having learned Erlang before any language that has shadowing, because in Erlang when you reuse a variable name it'll be checking for equality with the value you're "assigning" it to.
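For concreteness, here's what shadowing looks like in a block-scoped language, sketched in TypeScript (the function and names are illustrative):

```typescript
// Shadowing: the inner binding temporarily hides the outer one
// within its block, then the outer binding is visible again.
function describe(n: number): string {
  const label = "non-positive"; // outer binding
  if (n > 0) {
    const label = "positive"; // shadows the outer `label` here
    return label;
  }
  return label; // outer binding again
}
```

In Erlang, by contrast, the second `label` would not be a new binding at all; it would be a pattern match against the existing value, which is exactly the mismatch of expectations described above.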

I have not found that to be an issue at all. Most F# modules are just types at the top and then functions. This generally does mean that F# modules start with the big picture (i.e., the types) and then move to the details (i.e., functions). If a module is doing something more complicated than that, then I think it's a problem of the module's design likely being too broad.

Yeah I don't mind it. It forces a structure on you. I'm a fan of having common structures in a language.

    In codebases that allow out of order definitions, things get wild real quick.
As I understand it, Java allows this. You can effectively have circular dependencies. If true: I have worked on multiple million+ line Java projects in my career; none of them got "wild real quick". Also, C++ has forward type declarations to effectively allow the same. Again, there are ginormous projects like Google Chrome and Firefox written in C++.

> In codebases that allow out of order definitions, things get wild real quick.

Not an issue in C# at all. Get wild in what way?

It can be an issue in C#. I've seen some mad circular dependencies between files. The file hierarchy doesn't always reflect the namespace hierarchy which sometimes doesn't reflect the actual code dependencies hardly at all and untangling the messes that result from that can be a big deal.

I'm not sure a project gets into that state "real quick", but it remains something that projects can do over time, sometimes without realizing it, especially when a DI abstraction makes it even less obvious how circular a project's dependencies really are.

I'm not sure why those things are necessarily a problem, particularly with IDEs or at least editors that can parse a language's symbols. We're not programming in the dark ages. Do you have a specific example of why these are an issue?

I think "wild real quick" might be a bit extreme, but cyclomatic complexity[1] is a source of unnecessary complexity in software. While not a problem per se, all unnecessary complexity adds up, leading to the eventual death of the project[2].

  [1] https://en.wikipedia.org/wiki/Cyclomatic_complexity
  [2] https://en.wikipedia.org/wiki/Software_Peter_principle

Ergonomics matter

F# always came across as very weird in the syntax area

Each to his/her own. I've been programming since the 80s and still find C and C-style languages the weirdest of all.


In my experience, the worst performance is by C++ candidates.

Why? Not because C++ is bad, or attracts bad programmers.

Rather, because if that's the language you reach for when you need a quick solve, you probably don't know very much, and that says something.

Curious about what one of your typical "practical exercises" looks like?

I'm not a great fan of f# (it's not rational - but every time I've tried to dip my toes, something in the syntax has felt... Tedious? In a way that for example StandardML does not). But it still seems like an eminently powerful and pragmatic language, so I'm surprised by your experience.

Ed: I see this was addressed downthread:



Fascinating that fsharpers trip up on ad-hoc JSON wrangling - to be fair, I used to agonize over nested lists/association lists in Lisp, rather than doing the "professional" thing and YOLO-assuming that three levels down, behind :user, there's a fifth element :email...

I'd love to know what dynamic programming exercises you're asking interviewees to complete in the timespan of an interview that wouldn't show up on LeetCode.

I think they meant it in the literal sense; "programming requirements that are dynamic", not dynamic programming algorithms you'd use for the knapsack problem :)

"Ad hoc" probably being a less ambiguous wording

It involves a lot of preparation on my part. The ask for the candidate is usually a small service that does some trivial thing in our business domain. The interview problem is usually making a script to manipulate the data, or serve an API endpoint that calls my API and transform the data to match a certain output shape.

Interesting. So is actually executing the script, server, or API call without errors part of the interview? Is that what makes them "practical" or is it because the data structures and algorithms are related to your business domain?

Not trying to nitpick, your comment just piqued my curiosity because you made the point of distinguishing your exercises from leetcode and also stated that those who chose F# were generally poor performers.

It is, but I’m very forgiving of scripts/services that don’t run right the first time if it’s clear the logic is on the right track. If your logic is sound and you’re stuck on an esoteric error, I usually count that the same as completing the exercise. (There have been cases where the person shows no debugging ability at all, which I do treat as a problem. But if you’re reading the error and there’s just not enough time for a fix, eh you were close enough.)

jacamera is on point. Perhaps it's not "leetcode" but it's a seemingly one-dimensional, time-constrained quiz on a specific skill, parsing messy JSON (your words), with you as the sole judge. Personally from experience, when I approach a new API, I recognize it's likely to take a few iterations, based on how that API's data integrates with the rest of my program, to determine the best way to deserialize the API's responses into objects. If I've only got 45 minutes, yea, I'm just going to map it quick and dirty and it's going to look ugly.

Your observation about F# may be valid, but this does appear to be a test of one specific use case for a language, not how productive people are when building entire applications with them.

Why did you take from my comment that parsing messy JSON is the only part of the exercise? It’s just the part that we seem to get stuck on with F# candidates, not the only thing I ask about.

because you mentioned it 3 times

I found F# great for dynamic programming, do you recall what tripped them up?

Parsing realistic (messy) JSON, usually. I would have expected F# to shine at that due to type providers, so it was doubly surprising to me. The F# candidates I've seen spend most of the interview manipulating the data.

This is likely because people don't have enough real-world experience with functional programming paradigm techniques like 'parse, don't validate' and modelling only the parts needed to handle dynamic input, e.g. as shown here https://lexi-lambda.github.io/blog/2020/01/19/no-dynamic-typ...

It's unfortunate that they decided to pick F# on such problems when they didn't have the (mental) toolkit to tackle them, but I think it speaks more to people being really eager to communicate their enthusiasm for FP. I wouldn't try to ascribe any higher meaning to it.
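To make the "model only what you need" idea concrete, here's a minimal F# sketch using System.Text.Json. The `user`/`name`/`email` field names are invented for illustration; the point is that only the fields you actually need get parsed, and everything else in a messy payload is ignored:

```fsharp
open System.Text.Json

type User = { Name: string; Email: string }

// Parse only the fields we care about; any extra or unexpected
// structure elsewhere in the payload is simply never touched.
let tryParseUser (json: string) : User option =
    try
        use doc = JsonDocument.Parse(json)
        let user = doc.RootElement.GetProperty("user")
        Some { Name  = user.GetProperty("name").GetString()
               Email = user.GetProperty("email").GetString() }
    with _ -> None
```

Anything that doesn't have the expected shape falls out as `None`, so downstream code only ever sees a well-typed `User` — the "parse, don't validate" idea in miniature.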

I've seen the opposite, and have seen some pretty big, fast professional systems built in it. Admittedly most of them are closed source, and the community contribution allowed is small. Most of the developers using it just market themselves as C# developers when they change jobs, because of some of the opinions above that they see online. I have seen internal speed comparisons against similarly tiered languages where F# shines with less code, particularly around math (C# is just starting to catch up here). I've even seen dynamic problems solved in it in a professional setting, to your point, to speed up very frequently run calculations for a large customer base.

However, technical capability and efficiency of the tool aren't the only concerns, judging by the comments in this thread so far. The herd effect sadly can make people nervous, creating a barrier to trying it, especially if your career depends on it. What other people think of it, and their preconceived notions, can matter.

This is, from a distance, fascinating anecdata. I would expect F# itself to excel in this area (type oriented development being a huge selling point), and people who’ve chosen it to be attracted to those parts of F# which should excel for it. I’m often surprised when my at-a-distance expectations don’t meet reality, but not so often astonished by it.

How messy is the JSON exactly? Type providers are awesome when the schema is consistent.

But tbh, from what little I know, I'd be expecting you to expect me to solve the issue from first principles. So using a "and then magic" technique might be something interviewees shy from.

Hahaha, "when the schema is consistent". The strong typing herd keeps thinking it can smash reality into a square hole.

This sentiment is surprising. Doesn't Python crash a lot at run-time precisely because some 'dynamic types' clash? They become a real type at some point, at compile time or at run time. Why wait for the crash to figure it out?


Those F# candidates would have done better to use C# for the ad-hoc data wrangling and F# for the algorithmic part of the program.

Isn't ad-hoc data wrangling the feature of F#? Why are so many people here against F# for reading/manipulating ad-hoc data, that's what it is good for.

I find I spend a lot of time building records for JSON data. Type providers are nice but I've found them to be a little untrustworthy.

Normally now I ask ChatGPT to create an F# record from the JSON, then go through and check everything line by line, redo the types to something sane, and use that.

But if it were a quick script to do a one off then a type provider will work well.
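For that one-off-script case, a type provider sketch might look something like this, using FSharp.Data's JsonProvider. The sample file name, endpoint URL, and the `Users`/`Name`/`Email` properties are all placeholders — the provider infers the actual property names from whatever sample you point it at:

```fsharp
open FSharp.Data

// The provider reads "users.json" at compile time and generates
// typed accessors matching the sample's shape. Both the sample
// file and the URL below are hypothetical.
type Api = JsonProvider<"users.json">

let run () =
    let data = Api.Load("https://example.com/api/users")
    for u in data.Users do
        printfn "%s <%s>" u.Name u.Email
```

This is great when the live data actually matches the sample; the moment the real API drifts from the sample's schema, the generated accessors start throwing, which is the "untrustworthy" part.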

Type providers don't do well with messy JSON and are finicky at the best of times — the last thing I'd rely on during an interview.

Depends on what you mean by messy? Non-conforming JSON? A custom FParsec parser might be able to sensibly extract the data. If it is conforming to JSON then you'd use normal F# code to work with the standard JSON parsers.

It’s syntactically valid JSON.

I don't know then, LinqToJson is a pretty good starting point for F#
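For reference, a hedged sketch of that ad-hoc style from F# with Newtonsoft's LINQ-to-JSON (the JSON shape and path here are invented for illustration):

```fsharp
open Newtonsoft.Json.Linq

let json = """{ "user": { "profile": { "email": "a@example.com" } } }"""
let root = JObject.Parse(json)

// SelectToken returns null when the path is missing,
// so Option.ofObj turns the result into a proper Option.
let email =
    root.SelectToken("user.profile.email")
    |> Option.ofObj
    |> Option.map (fun t -> t.Value<string>())
```

Because missing paths come back as `None` instead of an exception, this degrades gracefully on inconsistent payloads, which is exactly the messy-JSON interview scenario.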

There should be a thing called "Yeetcode" that rewards you for deleting as much code as you can while still solving the problem.

> Implies that if someone just saw some code in F# and realized what people can do in it, they would be super impressed. I have not found that to be the case. If that's a general problem and not just a quirk of my personal experience, that has to be fixed first.

Maybe I am mistaken here, but it seems to me that the line you are quoting from the essay implies the opposite of what you said.

>If I had to pick a language that predicted you'll do poorly on a practical interview exercise, I would pick F# every time

As someone that has just spent a while learning F# and really enjoys it, this makes me sad. I hope that I don't have some trait that drew me to F# that also causes me to be less competent.

Don't be sad. This is definitely a skewed perspective from a single person which makes it worse than useless as a way to gauge a random persons competence.

I'm sure you could find a person who would claim this same exact thing for any language if you searched around a bit.

To derive a useful conclusion you would need to control for all of the factors involved in the interviews and objectively measure their performance compared to other people's, anything less than that is completely useless at best.

I took this post about F# talent to really imply: F# has language features that protect against or handle 'some concept', and when the programmer was asked to do something else, they had trouble because they wouldn't normally have had to deal with it in F#. It isn't necessarily the F# programmer's skill level — dealing with other languages means needing more code to cover the gaps, and knowing where the gaps are.

You can be a great Java programmer; now go do something in assembly, and the result doesn't reflect on your Java skills.

I don’t think you have any good reason to think you’re inherently less competent.

The most important thing is to always be learning. You’ll never be done, embrace that and rejoice in it. Most people more or less stop learning in early adulthood: don’t do that, and you’ll eventually be ahead of the pack.

> They prefer the safety of the herd

I.e. social proof

Are you implying this is a F# problem? What language do candidates do well in?

Is F# too approachable due to its closeness to C#, popular with C# professionals, and therefore not approached as a functional programming language as much as a C# extension?

I've seen Elixir, Python, Javascript, Java, Scala, Clojure, C#, Go, and more; not sure who is at the top, but F# is 100% at the bottom (just in my personal experience.) I wouldn't chalk it up to being too approachable.

I did think F# had a pretty steep learning curve — so, not that approachable. But how did it compare to Scala/Clojure? Those seem to have a similar problem with approachability and adoption.

Just curious, what kind of career did you have that enabled a long stint as a functional programmer? The fact that you don't use LeetCode also suggests you are not at a FAANG, which makes it more intriguing.

I wonder if that is because people who pick obscure languages are more interested in looking cool than getting the job done.

Like in an interview setting, even if they say you can choose any language, it is probably a safer bet to go with something the interviewer has probably seen before.

Programming language evangelism is basically a zero-sum game.

Some languages don't really intrude on one another's territory — e.g. not that many people are rewriting Ruby programs in Rust — but some very directly compete.

So if you want to convince someone to use F#, you have to convince them it's significantly better than some other closely-related language. And that's hard!

I have a strong suspicion that the next decade will see a reduction in programming language diversity. JavaScript/TypeScript and Python will become even more popular, to the detriment of everything but Go, C++, and Rust.

Platform-specific languages like Swift will persevere, as will Java, which is unkillable, but the vast array of languages will become less vast.

> not that many people are rewriting Ruby programs in Rust

Just an anecdote (and definitely not "many"), but one thing we've found is that Ruby and Rust actually fit fairly well together. The way I express something in Ruby (often FP stuff like `map` and `filter`) translates very well into Rust. Much better than it translates into (as an example) golang.

Beyond expression, the tooling in rust (cargo) feels very familiar and intuitive. Ruby -> Rust in that regard has felt much more natural than Ruby -> JVM/golang/C++.

Don't get me wrong, it's not a trivial jump, but I've been pleasantly surprised by much of the developer experience.

You might find it interesting to know that Yehuda Katz, author of bundler, was hired to write cargo. There has been a lot of cross pollination between the Ruby and Rust communities.

Yep. Katz, Klabnik and many others (whose names surely don't start with K) have a cultural presence that is certainly felt.

> JavaScript/TypeScript and Python will become even more popular, to the detriment of everything but Go, C++, and Rust.

Why do you see the popularity of C# waning?

Rust might take its place if the tooling would be on par. Personally I'd jump.

You are the first person I have heard mention the desire to transition from C# to Rust. Usually, it is C++ to Rust, or Java to C#.

    if the tooling would be on par
I am a betting person, and I say: "It never will be." It is easy to overlook the importance of better developer tooling. It directly translates into higher developer productivity. Every time I am forced to use a language with worse developer tooling, I am reminded of this performance hit. This is why Microsoft spends so much time and effort on its developer tools. Also: better tooling means below-average developers can level up to average. That is most of corporate computing, so a win for the bean counters.

Also: there is no single, commercial, controlling force behind Rust, as there is behind Java and .NET.

Don't interpret this post as anything against Rust. I think it is a brilliant language.

Agreed, I think people are sleeping on the relative disparities in tooling between langs, and it will only get worse.

I feel this daily as a Clojure programmer. I've been playing with Copilot, and it's astonishing how much worse Copilot is at generating Clojure code, than say, Js. The difference is probably due to training volume, and if AI-assisted coding is worth it at all, the benefits will primarily accrue to the largest languages.

At the office, a teammate and I recently had to write some C# again using Visual Studio. (Normally, we write Python and Java using JetBrains' IntelliJ.) Even then, we could both tell the developer experience was slightly worse in Visual Studio than in IntelliJ. That said, there are lots of nice (new) language features in C# to make up for the difference!

In case you haven't seen it already, JetBrains also makes a C# IDE called Rider [0].

Personally, I find the JetBrains IDEs overly complicated. More so than Visual Studio, but that may be familiarity. (Also, I use VSCode for .NET these days so full IDEs feel heavy to me anyway.)

[0]: https://www.jetbrains.com/rider/features/

Good luck having the whole .NET Windows ecosystem rewritten in Rust.

Eh. It is vast, but the tide is shifting. The non-UI components were never that good. WPF is in maintenance mode. And there is a growing need for deep learning, where .NET is currently a total letdown.

Still, more likely to be Python and C++ than Rust.

See ONNX, DirectML and ML.NET announcements at BUILD.

Python and C++ aren't replacements for C# due to different issues with them (which are not present in Rust).

Neither ONNX, nor DirectML, nor ML.NET excel at deep learning. TorchSharp is the closest to what is needed for the modern ML work, and there is almost no investment in it - I believe there's just one person working on it, and even that is part time.

The only thing that keeps me on C# is lack of Rust debuggers that would be able to evaluate arbitrary Rust expressions including full support for traits.

Rust and C# do very different things

There's a huge amount of overlap. They are both general purpose languages - as such they are in competition. I would say there are more applications where either would be a reasonable choice than there are where one is the obvious choice and the other isn't a reasonable option.

C# will stay because Microsoft will push it. Honestly what makes Go so special? I could see another language or even Java or C# get their AOT story together and retake what was lost to Go.

> Honestly what makes Go so special?

Google branding, and yet another language from UNIX creators.

None of its direct influences, Oberon-2 and Limbo, achieved market success when ETHZ or Bell Labs were pushing them.

Docker and Kubernetes successes made Go unavoidable on DevOps space.

Garbage collected "modern" language that builds native binaries? There is very little that can compete with that. I don't really like Go, but if I ever need to write a binary I will reach for it, because I don't really want to do manual memory management (even the rust type). My needs are rarely that performance oriented.

Like I said, Java and C# have single binary AOT builds coming down the pipe. Shops could end up gravitating away from Go for this reason.

Crystal, for example, would be a better Go for me.

Languages are so fundamental; they are philosophy and tools of expression. Debating better ways of doing these things is something that naturally happens in our heads, and by extension in discourse — the motivations aren't necessarily extrinsic.

Even if your suspicion is right - since so many new programmers enter the field, absolute community sizes can still easily grow and hence we get more real-world viable languages.

Personally I think we've witnessed an acceleration in the coming and going of languages. Go, Rust, TypeScript, Clojure, Elixir, Zig, etc. And C++ is being dethroned in many more application areas than seemed likely only few years ago. And the GPU realm is still nearly entirely untouched territory. Plus the AI craze may yet stir the soup significantly.

The thing about AI-generated code is that it may do what you want, or it may do something else, and it's hard to know which is the case. That means you, the human programmer, must verify the AI-generated code. And to be able to do that, you must understand the code written by the AI. That suggests that languages which are easy to read (i.e. to understand) will have an advantage in the AI era.

But also, some languages really overlap directly, like Elixir and Ruby, or C and Java and JavaScript. So maybe it's not "zero sum" between every language.

And even more so for beginners — most languages only show their differences at a much deeper level.

Agreed. Tweaks: I don't think C++ is going to be relevant for much longer, Rust solves the problem space better.

Python ... maybe? It's so different than the rest, I can see it going the way of Ruby once AI bindings improve in other languages. I can also see Julia dominating the AI space.

The "better than X" languages though? C#, F#, Scala? Done. I don't even hear about Kotlin that much anymore, Java is starting to adopt the good stuff from them.

I would love for Rust to take over C/C++, but in embedded it's still got a while to go, and it's not even really Rust's fault per se. It's the tooling and third party driver problem that's the real issue. It's getting there, sure, but it's getting there by rebuilding the world, which is... like I said, it's a while away, IMO.

> Agreed. Tweaks: I don't think C++ is going to be relevant for much longer, Rust solves the problem space better.

I would say that in the domain of game development (which is where I'm guessing most new C++ development is done), C++ has such a moat that Rust in its current form will not be able to displace it.

There's also Mojo[0], which brands itself as an alternative to Python, and Google looks set to release Carbon soon as well. It'll be interesting to see how these two grow.

There's also Odin[1] that looks promising.

I don't think C# is going anywhere, F# on the other hand :shrug: is at the mercy of MS - they always seem to be on the fence about it.

[0] - https://www.modular.com/mojo

[1] - http://odin-lang.org/

Mojo is a language that thinks it will impress the Python programmers with its ability to implement matrix multiplies directly in it. I don't think it will be that easy, but it might replace Cython.
