What is the case against F#? (2009) (stackoverflow.com)
59 points by rfreytag on Feb 15, 2021 | 105 comments



The case against <insert non-mainstream language here> for most people is the chicken-and-egg problem of the mainstream adoption of the language.

It should be relatively obvious at this point that the outright merits of a language have little or nothing to do with its popularity, at least when compared to things like the corporate backing and will behind a language (see: Objective-C / Swift), and other factors such as positioning and luck (e.g. Javascript). So, you cannot start from "this language is _clearly_ better, so it will certainly become more popular".

And this is the problem for most people / dev shops choosing a language. Popularity brings a huge number of good things, such as good-quality training materials, job security, easy hiring, good-quality libraries, etc. (and a few negative things), so it is kind of a big deal.


I don't like to shy away from languages just because they won't ever become one of the top 5 languages. Moderate success for a language can often be more than enough to keep the things built on it afloat for a long time.

I also suspect that the relationship between how many people are successfully using a language and how often you hear about it on the Internet is non-linear. And most of the popular rankings (TIOBE, SO developer survey, etc.) are really measuring how often you hear about it on the Internet.

For me, the real killer feature that every language must have is an escape hatch. This is every bit as true of AAA languages as it is of up-and-coming ones. Give me a way to expose a C ABI or a COM interface, and I'll feel more confident I won't get trapped. (Though I still cry myself to sleep at night knowing POSIX doesn't have a great answer to COM or WinRT.) For one project I'm working on, I'm currently feeling very trapped in Java, of all things. Because, while Java has decent enough facilities for calling out, it's much more difficult/expensive/both to call into Java from other platforms.
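
To illustrate what I mean by an escape hatch, here's roughly what the "calling out" direction looks like in F# (a minimal P/Invoke sketch; assumes libc is resolvable, e.g. on Linux):

    open System.Runtime.InteropServices

    // bind the C function once; the runtime resolves it in libc at call time
    [<DllImport("libc", CallingConvention = CallingConvention.Cdecl)>]
    extern int getpid()

    printfn "running as pid %d" (getpid ())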


I totally agree with you, when it comes to my own personal projects. (In fact, the number of languages I have relatively seriously dabbled in is getting to the point that it's best to leave most of them off a CV, lest one come off as a jack-of-all-trades or something.)

But for companies, it's pretty hard to put together 3-4 teams of developers in some esoteric language, and to find developers for it many years down the line for maintenance. So they understandably stick to mainstream stuff, even if there are significant inherent advantages to something off the beaten path.


My sense is that that is unnecessarily self-inflicted. Companies' focus on hiring people who already know the language they use is penny wise and pound foolish.

Even if a new developer is a complete novice at a language, I would still expect that to be less effort than learning the business domain, getting to know the codebase, and socially integrating oneself with the team.


> It should be relatively obvious at this point that the outright merits of a language have little or nothing to do with its popularity, at least when compared to things like the corporate backing and will behind a language (see: Objective-C / Swift), and other factors such as positioning and luck (e.g. Javascript). So, you cannot start from "this language is _clearly_ better, so it will certainly become more popular".

Well... it's complicated. I would say that for a new language to succeed now, it has to be clearly better at something - some niche, or some programming paradigm, or some such. It also has to meet the minimum bar of having libraries that cover much of the normal stuff that we expect libraries to cover. (That's where corporate backing comes in - it pays for building all the libraries that people have come to expect.)

But if the language isn't clearly better at something, then it's clearly worse, because I can hire people for a mainstream language, or I can have trouble hiring people for the offbeat language. If the offbeat language isn't enough better to make that a worthwhile trade-off, then why not use the mainstream one?


> It should be relatively obvious at this point that the outright merits of a language have little or nothing to do with its popularity

I beg to differ. People have had 25 years to abandon Java, but its usage has only grown. I remember when Groovy and Ruby had their hyped periods sometime in the 00s, but what happened to that wave in the end? Grails and Rails became history; FAANG nowadays runs on Spring, and the same goes for the rest of the world. Every practical feature that a JVM language had at a moment in time was later adopted by Java in some way (maybe with the exception of Clojure, which is the only one that didn't try to be a better Java).


You didn't describe anything meritorious about Java.

You described the network effect of large companies converging on one language.


I don’t really see how Rails is history. It’s still one of the most popular web frameworks.


When I was learning F# I felt the biggest barrier was lack of good documentation. I'm used to languages like Node.js, Ruby, and Elixir, where APIs/guides are straightforward.

Programming language maintainers need to think about UX.

I've started building a web server library[1] in F# to address this. Other libraries took it for granted that I had knowledge of .NET and required lots of .NET boilerplate.

[1]: https://wiz.run


This is great. I appreciate the "for dummies" approach in the tutorial.

Noticed one typo in https://wiz.run/api/#api-run — the second `setHost` should be `setPort` but also probably you don't need either of those in that `run` example


Why You Should Use F# (2018) (HN): https://news.ycombinator.com/item?id=26142696

Introduction to the 'Why use F#' series (2012) (HN): https://news.ycombinator.com/item?id=26142662

... in case anyone thinks I am against using F#. It's important to know the pros and cons.


F# is a lot like OCaml. But OCaml still struggles with multiprocessing, while F# can run on the .NET infrastructure which has this solved.
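
E.g. spreading pure work across all cores is a one-liner (a trivial F# sketch):

    // Array.Parallel.map partitions the work across the thread pool
    let squares = [| 1 .. 1_000_000 |] |> Array.Parallel.map (fun x -> x * x)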


I really don't see F# as being a lot like OCaml in any practical way.

It's like, if you're sitting in a hardware store having a hard time choosing between buying a Ryobi brand miter saw and a Ryobi brand power drill for your current project, maybe it's time to take a step back to better define what you're planning to do, and then come back to the store. Sure, since they're both Ryobis they do have a lot of things in common, but that doesn't mean you'd ever substitute one for the other.

I won't list every difference, because there's a lot, but, to an approximation, the intersection of their two feature lists looks more-or-less like SML's feature list.


They are similar to the point of being identical relative to comparisons between either and any other language you can actually get a job using (although I have a soft spot for it, SML does not fall in this category as far as I can tell from a quick search).


What are some situations where you need parallelism? I am not convinced there are many. I have never heard anyone say they are using .NET instead of JavaScript or Python in order to get parallelism.


Web server programming is inherently multithreaded. I'm not sure if that's a convincing answer, but web performance is definitely a common rationale to use .NET or Java over Python or JS.


Inherently parallel, sure. How often does that parallelism need to be within a single process? Not that frequently in my experience. If your threads aren't sharing state they can just be processes instead. If they are sharing state that's going to cause trouble if/when you have to scale beyond one process.


Programmers use frameworks that offer everything they need out of the box. They do need shared state for sessions. But they don't sit down and think if they're going to use threads or processes. They just choose a popular, feature-full framework.

Java and .NET frameworks use threads, the rationale being that threads perform better. I won't go into that discussion, even if I do have an opinion. Right now, I'm just pointing to what others say.


Any situation where you want to keep latency under control and thus asynchronous programming is not adequate.


OCaml is still pretty fast and part of RHEL. How good is Linux support for F#? And how hard is it to get Mono installed?


Mono? I thought .NET Core now ships[0] for Linux:

0: https://docs.microsoft.com/en-us/dotnet/core/install/linux-d...


Linux support for .NET in general, and F# in specific, is good. You don't need Mono at all as of .NET Core a couple of years ago (and the situation has only improved going into .NET 5).


Linux has been a first-class target of F# and C# since .NET Core.


Don't use Mono. Install the .NET 5 SDK and you get F#. .NET 5 is .NET Core renamed.


F# is the only functional language supported by a big corporation. It is not at Go's level, but it is still safe to say that Microsoft has some serious commitment to this language.


> F# is the only functional language supported by a big corporation

It might be the only functional language developed “in house” at a major corporation, but it's not the only one supported by a big corporation, as Microsoft (under the GitHub name) is a leading sponsor of the new Haskell Foundation.


As well as employing Simon Peyton Jones.

Jane Street and OCaml would be another good example of deep corporate involvement in an ML family language.


They see it as a better Python for Data Analytics/ML/etc.

After playing around with it and porting some of my Python work to F#... they may be right.


I've been coming to the opinion that I'll just never be happy with nominal static typing for data science or engineering. I'm curious about structural typing, but, of the languages I've tried so far, only dynamic typing has kept me happy for the long run.

I say that as someone who, prior to getting into the data space, was a fanatical partisan of static typing.

For actually implementing the core bits of analytics and ML tooling, I have a hard time seeing past languages that can match the performance of Fortran/C/etc and are able to expose a C ABI. Because those languages let you write one central, highly-optimized implementation that everyone can access from their favorite higher-level language.


It's not _quite_ meant specifically as a better Python, but it can play that role and there is every intention of making that something that has "product truth" to it. You can look forward to some concrete improvements along those lines this year, specifically in the notebooks tooling space and some library support!


In terms of my current use, the new scripting feature with #r "nuget: ..." is game-changing, because now we can freely share .fsx files and they... just work. Compared to praying to the pip/virtualenv gods in the Python space.

Really excited to see what y'all have planned for 2021.


Oh yeah, it’s a great feature. People like it a lot more than we thought they would, and we already had high hopes for it! Still more improvements to make there, though.


Just need some firm locking/pinning and I think you have nailed truly reproducible analytical workflows.


You can actually do that today by specifying a version number. The docs cover it here: https://docs.microsoft.com/en-us/dotnet/fsharp/tools/fsharp-...
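
E.g. (package and version here are just illustrative):

    #r "nuget: FSharp.Data, 4.2.10"   // pinned to one version, so restores are repeatable
    open FSharp.Data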


This is exactly what I'd like to use F# for. However, I don't know if there's a good ecosystem for F# that gives me anything over Julia's, for example.


Check out the new features for making feature-full scripting files (.fsx):

https://devblogs.microsoft.com/dotnet/announcing-f-5/#packag...

I am starting to use this to port some of my Python data-munging scripts, especially those that have to call external APIs, because FSharp.Data's type providers are voodoo magic.

https://fsprojects.github.io/FSharp.Data/library/JsonProvide...


ML.NET might do it, if that is a domain you care about.


Erlang/Elixir and Chez Scheme are backed by large companies.


“Most 'developers' don't understand functional programming concepts, and can't even write very good imperative code in C#. So what hope have they got of writing good functional code in F#?”

I hope 11 years later we have a different attitude.


What's wrong with that attitude and what do you think should be better by now?


I can't speak for melling, but as for me, that attitude struck me as dismissive not only of the capacity of the average developer to understand fp, but a bit of fp itself as too hard. 11 years later, fp concepts are more generally understood, accepted, and used, and the attitude has been shown to be incorrect. fp is not that hard. It is, in fact, easier than OO.


Well, SOME ideas have proven easy to understand. The use of higher-order functions like Filter/Map etc. is definitely widespread now and has proven its usefulness.

The idea of keeping functions pure, when you can without making a mess of things, has pretty wide agreement.

The use of Option types instead of Null is getting to the point of being widespread (C# 9 sorta does it, Zig kinda does it, people love it in Rust and F#)
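
In F# that looks like, for instance:

    // the missing case is part of the type, so the caller must handle it
    let tryDivide x y = if y = 0 then None else Some (x / y)

    match tryDivide 10 2 with
    | Some q -> printfn "quotient: %d" q
    | None -> printfn "cannot divide by zero"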

But other things, like passing partially applied functions to other functions, are still really hard for me to reason about, especially with type inference on those function signatures. Immutable data structures, while nice to reason about, sometimes have huge performance downsides. And computers aren't really getting faster in ways that help that much anymore.


"partially applied functions to other functions" - In C# you do this anyway via Fluent interfaces and/or lambda Func<_> arguments. The only difference is in F# any function can be made into a fluent style pipeline (|> operator) due to this partial application whereas in C# the function has to be built specifically for it. Partial application for me is one of the reasons F# code tends to be nicer than C#/Java code there's less need for things like DI containers, and other big frameworks each with their own learning curve and corner cases. It feels more like I'm coding in a static version of JS/Python once you get used to it vs C#/Java which have a lot of ceremony - expect that type inference you mention means those partially completed functions are much less likely to be used wrong.


> that attitude struck me as dismissive not only of the capacity of the average developer to understand fp, but a bit of fp itself as too hard.

It's a self-inflicted criticism. It's non-fpers saying that fp is too hard, not fpers.


Me in 2010: "Functional programming is great because you can write code that always gives the same output for the same input, making it super easy to test, debug and share. The downside is you need to put down those for-loops and start thinking in terms of map and reduce instead."

Programmers in 2020: "Yeah some of the people on the team know how to use flatMap."


Most of the time my C# code uses most of the functional paradigms. I write pure functions when I can and use LINQ almost everywhere. Recently I created an F# API with the dotnet CLI, but the syntax is not even close. I get the concepts, but the syntax is a little bit too much to just try it when I have to write a new microservice. Am I the only one?


F# syntax is so much better than C# for functional programming - and arguably for object programming too. However, it is sufficiently different that you really need to jump in with both feet. Don't assume it will be easy because you know C#. However, once you start to grok the syntax, you begin to see how the language features elegantly tie together, and the language design decisions make perfect sense. Unfortunately, you just have to take it on faith that this moment will occur... you can't see it unless you put in the time to learn it.

There is a whole series on C# to F#: https://fsharpforfunandprofit.com/posts/porting-to-csharp-in...


This difference is a big thing. People learn C#, Java, JavaScript, Basic, PHP, or C/C++ early in their career. Hopping between these is hard, but not because of syntax. Hopping from this OO gang to a modern functional syntax is brutal.

I think the key to proper functional adoption is wide entry-level adoption (and not only at university).


Indeed. And sometimes these same people will claim, based on their experiences only with the OO gang, that PL choice doesn't matter! If this is you, set aside an afternoon and try a Lisp or something from the ML family, it might change your whole perspective!


I know OCaml and F# are super similar, but how similar is Rust's syntax?


(I wrote this without realising that you specifically meant "syntax for functional programming". Sorry.)

Syntax is fairly different but the concepts are broadly similar. Rust has many of the same nice things as F# does, like "if" as an expression rather than a statement, algebraic data types, mutability declared as a keyword, etc. In fact Rust is much stricter with the "declared mutability" thing, because in F# you can use an immutable reference to a mutable object to mutate the object.

The really big paradigm difference I found as an F# person when starting to learn Rust was the lack of guaranteed tail-call optimisation, meaning that Rust kind of wants you to avoid recursive functions. I also find it much more annoying to write sequences ("Iterators") in Rust than in F# (where the `seq` computation expression makes everything ludicrously easy at the cost of some oddly bad performance sometimes). F#'s anonymous interface implementations are also really handy, but Rust's situation there is at least no worse than C#.
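
For anyone unfamiliar, the `seq` expression in question looks like this (a trivial example):

    // lazily yields values on demand, like an Iterator
    let squares = seq { for x in 1 .. 10 -> x * x }
    squares |> Seq.iter (printfn "%d")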


Rust is influenced by OCaml but has lots of C++ concepts thrown in too. I don't think it makes sense to be choosing between OCaml/F# and Rust since they are so different. If you need the performance characteristics of Rust, then the choice is made for you (and you should also consider modern C++). If not, then the borrow checker is probably a burden that you don't need.

Maybe I'm missing something. Under what use-cases would someone be choosing between OCaml/F# and Rust?


I don't think I ever would; I just knew they had common heritage and wondered if that was evident to the developer. I've been a professional C++ developer for the last 18 years, and I've got a couple of small projects now that we are using Rust for. I know enough to modify Rust code or fix bugs, but not to create an idiomatic design in it. If I don't need the speed, I am usually prototyping, so I use Matlab. If I don't need the speed and I'm not prototyping, I use Python, but that isn't often. I think Julia might end up having a spot in that range of options, but C#/Java isn't enough faster than Python or enough easier to develop in than C++/Rust for me to consider it unless I need to use existing code.


Rust is kinda like F# with curly braces and no garbage collection. =) There are many more differences than that, but definitely if you know F#, Rust feels very familiar, and vice versa.


I would be very worried using any language developed and marketed by a single large corporation. It's not only that they or the language might go away suddenly (hello Google!), it's mostly that this corporation has a de-facto iron grip on the future development of the language and the developer culture.

I simply don’t use any such languages.


It's just that F#, unlike C#, is a bad example for that argument. There is a pretty independent F# Software Foundation, with even the lead Microsoft F# personnel advocating for it.


Nothing which was initially developed inside a large corporation has a reasonable chance of becoming truly independent. The only example I can think of is Unix and C, and that’s only because AT&T was (for complicated reasons) legally restricted from selling it as a product, and the project was taken over by various other actors in both business and academia.

But C#, F#, Go, Swift, Dart, etc.? They are initially populated by developers from a specific company, and will, from the start, have that company’s culture and goals implicitly ingrained. No outside developer will be able to climb the ladder and gain any appreciable mindshare to significantly affect this; the direction is set from the start and cannot be altered as long as the original developers (or their successors from the same company) are mostly still there. To get back to my example, Unix only really started to thrive once it was taken over by the BSD developers.

(I have written about this before on here. First six years ago (https://news.ycombinator.com/item?id=8733705) and again about two years ago (https://news.ycombinator.com/item?id=18370067).)


- Kotlin (Jetbrains)

- Chez Scheme (Cisco)

- Erlang (Ericsson)

- Rust (Mozilla)

- Java (Sun Microsystems, Oracle)

The above examples show languages that outgrew the companies that developed them.

Typescript might follow a similar path in the future.


I don’t know about Kotlin. Scheme was already a thing before Chez Scheme, and Scheme in general has quite a lot of implementations, so Chez Scheme is probably mostly safe, merely because of that. Erlang I don’t know much about. Rust was created by the semi-large Mozilla Foundation, but Mozilla had mostly benevolent goals and designs for the language, so that one is probably OK. Java was initially made popular since people trusted Sun Microsystems, but then Sun was 1. destroyed by market forces, and 2. bought by Oracle. And Oracle keeps an even tighter iron-fisted grip around Java than ever, according to what I hear. So I wouldn’t touch Java with a 10 foot pole either.


Kafka? Airflow? Javascript?

The list goes on.


The only one of those that I have heard of is Javascript, and that was initially implemented by Netscape, but the language was then taken over by the industry at large as Netscape’s market share dwindled. Javascript is also unusual in that it had a specific and restrictive design goal, and a very short initial development phase before the language was mostly frozen. This meant that not a lot of Netscape culture could be put into the language, partly because the language itself was small, but mostly, I think, because of the very fast initial development by a small team (a single person over a period of days, IIRC).

So, yes, even though I don’t like Javascript as a programming language, I would not shy away from it merely because of its corporate roots.


You are just wrong and speaking like it's still the 90s or 2000s.

I just mentioned 3 very popular and effectively best in class technologies that started at a corporation and now have robust open communities supporting them.


Please name one thing I wrote which is wrong. I mean, I agreed with you about Javascript, and gave a lengthy explanation why, even though you gave no motivation. I have not heard of the other two, so obviously I can’t comment about them.


I find the whole framing of the case to be off. The author is concerned that applications can't take advantage of the increasing parallelism of computers because of the difficulty of writing fine-grained parallel tasks, but misses 1) the paucity of programs that end up CPU-bound, and 2) that HTTP requests already provide an excellent threading container within which imperative code runs, meaning that the system takes advantage of parallel computing even if the program author doesn't (so long as they don't explicitly write themselves into a problem).

Edit: others on SO already pointed this out.

Also, IMO imperative programming generally maps better to the way business requirements are stated: first do this, then that; unless you see Y, then do that. It's harder to walk a BA or non-technical manager through functional code and stand a chance of them tracking what's going on.


>It’s harder to walk a BA or non technical manager through functional code and stand a chance of them tracking what’s going on.

Not really. Look at F# code: you use domain words to actually model the business case.

This is really old code, but it communicates F#'s effectiveness quite well: https://github.com/swlaschin/NDC_London_2013/blob/master/ddd...
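
The gist of that style, as a tiny made-up sketch (types invented for illustration):

    // the domain vocabulary becomes the type vocabulary
    type CardType = Visa | Mastercard
    type Payment =
        | Cash of amount: decimal
        | Card of CardType * last4: string

    let describe = function
        | Cash amount -> sprintf "cash payment of %M" amount
        | Card (kind, last4) -> sprintf "%A card ending in %s" kind last4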


> Http requests already provide an excellent threading container

Yes, modelling code around receiving requests and returning responses was a good idea.

> It’s harder to walk a BA or non technical manager through functional code and stand a chance of them tracking what’s going on.

Jane Street adopted OCaml as its main programming language early on because the language's functional programming style and clear expressiveness made it possible for code reviews to be performed by traders who were not programmers, to verify that high-performance code would do what it was intended to do.


"Asked 11 years, 7 months ago"

Has the need to use functional programming languages become obvious yet? Has C/C++ et al. fallen by the wayside, yet?


Pretty much every mainstream language has adopted concepts from FP. C++ has closures. Even Java is getting pattern matching. Everyone has map/fold. ...

Most new languages I can think of have a concept of immutability and sum types. (Kotlin, Swift, Rust)

Many languages are in fact converging on what seems like a local optimum, with a blend of functional and non-functional idioms.

The line has become a lot blurrier, which makes the question hard to answer.


C/C++ have not fallen by the wayside, but most major programming languages have now inherited quite a few functional paradigms. I haven't kept up to date, but don't both C++ and Java have lambdas now? Don't they also feature currying? C++ is of course not Haskell, but that's not the point. Functional programming has become commonplace through procedural languages adopting its simplest and most useful aspects.


If it's not null-safe I will avoid it for new projects.

Lambdas: great! But real null-safety, strong type safety, proper sum types, and pattern matching/type destructuring are now my requirements.

I'd say Kotlin is the language with the most adoption that ticks the boxes. And it is (not surprisingly) an OO lang.


> I'd say Kotlin is the language with the most adoption that ticks the boxes.

How? Kotlin is decisively an object oriented language.


That's what I mean. It ticks my boxes (most of which you'd find in FP langs). Though a list of langs tick the boxes, the one that actually has a serious mainstream following is not FP.


Oh, I misread that completely.


Incidentally, the pattern matching that Java is getting is superior to and more fully fledged than the one that exists in Kotlin. You can also have null safety in Java by means of annotations.

You can also check out C# since it fits your requirements. It has pattern matching, but sum types are still in the works.


I've grown a strong dislike for annotations. One here or there is not the problem, but they will be used (I'm looking at you, Spring) to add magic to my programs. I want to know what goes on; annotations hide what goes on. Debugging problems with annotations is a real pain. I like that in Kotlin the community seems to fight overuse of annotations (they also fight overuse of Exceptions, which I really like: Exceptions are not a way to do multiple return values; sum types are).



Kotlin pattern matching and destructuring/unapply support are pretty weak, no?


Yes, Java's pattern matching is much more fully fledged than Kotlin's. You can destructure nested types. Also, at some point in the (near) future, you should be able to do things like:

    Object o = new int[]{1, 2};
    if (o instanceof int[] {var a, var b}) {
       // a == 1, b == 2
    }


I don't know about the currying, but Java already has lambdas and optionals, and is getting immutable records, sealed classes (product and sum types), pattern matching, and type inference in Project Amber [1]. These are all features usually found in functional languages (Standard ML, for example).

[1]: http://openjdk.java.net/projects/amber/


C maps to the machine well. (Low-level programmers will probably crucify me for that.) But take its lack of closures as an example. Closures require either some kind of memory management, or clever program transformations, at which point you lose the simplicity of just having the stack and heap.

As much as a generation of Java programmers (like myself) grew up thinking that C++ was old hat and Java was its natural successor, C++ has been quietly ahead of Java with regard to lambdas, generics, type-level trickery. Not to mention performance and backward-compatibility [1].

But reading C++ makes my eyes bleed. I much prefer FP for business logic and web-app development. Haskell is fast but it's no C++, and that's OK. I wouldn't want Haskell to take C/C++ market-share. I want it to compete against other languages that don't give me performance, safety or terseness.

[1] C++17 on the Commodore 64: https://www.youtube.com/watch?v=zBkNBP00wJE


Didn't they add lambdas to C?


There is a non-standard compiler extension supporting them for GCC, but that's all.

Also, I seriously hope they won't. The whole point of C in this day and age is extreme simplicity and portability. The latest standard, C17, has only corrections. C11 has really all that's needed for "modern" applications, aka a threading library which is not pthreads and decent unicode support.


Our team (C#) did an extended proof of concept in F# about ten years ago. We found that it was easier to hero-code in F# but harder to work as a team over time in F#. We ended up embracing a lot of the functional paradigm in C#, which seems like a very good compromise.


This is a good and understandable observation.

The next question is whether this comes from a lack of common constructs/conventions among those new to F#, or whether it would always pervade, as seems to be the case with Lisps. E.g. Clojure was specifically made to cover more in its library and syntax to promote standard styles.

Rather than thinking F# didn't/doesn't work, what were the reasons and what could be changed so it did work well?

Maybe some workflow changes would help, like more design discussions before putting up a PR for review, or pair programming with rotating pairs so that conventions emerge. Also, over time, a codebase that has benefited from convergence would show repeating patterns of adopted conventions.

I see the language itself as unapproachable only by reputation/lineage. In practice it can be on about the level of Elixir which is gaining traction and success stories.


Could it be that you are just observing productivity? It seems plausible that anything which slows things down helps people work on a thing as a team because it's easier to keep up and there is more time and energy available to organise teamwork.


> we found that it was easier to hero-code in F#

What does hero-code mean?


"Hero-coding" is the kind of thing where you alone go into a fugue state for a week, and emerge on Sunday night, covered in blood and with your hair standing on end, holding a crystal redolent with eldritch power.


Here's my stab at it (though I could be wrong). It's like Perl: easy for experts to write a lot of it fast, but hard for anyone else to grok what's going on.

As an anecdote, I see it come up with Clojure a lot. Some of the Clojure devs in my company can spin up a lot of interesting applications, and fast. However, trying to read any of their code can be nearly impossible. Why? Usually because of a generous helping of their own metaprogramming constructs that they've built up over time. I've watched them program, and it's both interesting and really hard to follow.

The big problem we've run into is that these Clojure projects have been really hard for teams to get into. As a result, the worst thing has happened: many of them are being scrapped and rewritten in Java because they are too hard to maintain otherwise.


Speculating, but based on context, probably "cowboy coding" with heroic intent and success.


> easier to hero-code in F# but harder to work as a team

Allowing hero-coding in any language makes it harder to work as a team, so using a language that leverages hero output only makes it worse. I wouldn't say that the power of the language is the thing that should be blamed.

I can certainly see that using constructs the team as a whole isn't ready to adopt is a problem; it also comes up in the adoption of Scala.


I use it every day! It's a great language, and plays well with C#, and even C++.


GC was probably not a concern in 2009, but these days GC pauses will kill any moderately high-throughput app. The Stack Overflow blog has lots of content dedicated to the lengths they go to in order to avoid them in C#, such that the resulting C# code doesn't look like idiomatic C# at all.

In F# that would be an even bigger challenge. You could do it, but it would be way ugly. F# is my go-to language for side projects, but for anything server-side that I'm getting paid to work on, I'd be reluctant to suggest F# as a starting point, as much as I'd like to.


Unless you're working on a SOTA HFT application where you're optimising your latency by the nanosecond, I can't see modern .NET/F#/C# being a problem in production. Given that Bing [1] and Azure Active Directory [2] run on .NET Core/.NET 5 and are among the most used applications on the entire internet, the idea that a medium-large project can't use them is ridiculous. And this isn't limited to modern .NET: Microsoft Trill has been running trillions of operations a day on single machines since 2012, as a .NET library [3].

[1] - https://www.bing.com/version

[2] - https://devblogs.microsoft.com/dotnet/azure-active-directory...

[3] - https://www.microsoft.com/en-us/research/project/trill/


I used to work in the Azure Resource Manager team and we had big GC issues there. API servers would pause for 200 ms and queues would explode. Workers would grind to a near halt in certain conditions due to GC overload. Granted this was a couple years ago and running dotnet framework, so perhaps things have improved.

I think it's likely most of the serious problems there could have been fixed by smarter in memory caching: a few big objects instead of millions of small ones. However the whole experience left a sour taste in my mouth for overly allocation-happy code styles.


Do you think that a lot of what you experienced was before widespread use of the Span primitive in the .NET libraries and frameworks? Even if you go to great lengths to avoid GC pauses in your own code, the components you depend on may not have done that and then you're toast. Much has changed in the past 1.5-2 years to eliminate almost all unnecessary allocations across the stack.

C# and F# also got compile-time analysis for working with spans, span-likes, byrefs, etc. as of 2018, so you can write some of this code yourself and have a reasonable assurance that the compiler will keep your code from allocating. Coding like this isn't easy either, but it's far simpler than it used to be because of the underlying types and compiler analysis that can help you.
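
As a rough illustration of the style (a sketch, not code from any real codebase):

    open System

    // sums the digit characters in a string without allocating a substring
    let sumDigits (text: ReadOnlySpan<char>) =
        let mutable total = 0
        for i in 0 .. text.Length - 1 do
            if Char.IsDigit text.[i] then
                total <- total + (int text.[i] - int '0')
        total

    printfn "%d" (sumDigits ("abc123".AsSpan()))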


Oh, absolutely. In fact I proposed an effort and wrote up a few prototypes to bring those things into ARM codebase while I was there.

IDK if any of it ever went anywhere though, as I left the team shortly thereafter.


"Granted this was a couple years ago and running dotnet framework, so perhaps things have improved" - I think this explains the differences in our opinions on .Net, as I'm from the opposite side of the spectrum having only worked on .Net Core on Linux. From other discussions I've had with people who used older versions of .Net, they seem to share your opinion that it was nowhere near as pleasant of an experience as something like .Net 5. As the other comment mentions, Span<T> has really permeated through core libraries, and the effect is much smoother, more predictable performance, with low memory usage making container based development a breeze.


While GC can be a factor to work around, it's also a religious issue (e.g. when C++ devs transition to C# they often fight it as a subjectively perceived foe), which always makes it suspicious to put at the top, especially in contrast to 2009. What's the difference there? You've still got a server that you're thrashing in both cases. Did something significant change in the GC implementations between 2009 and today that makes them _less_ suitable for a backend server?


F# is designed so that the performance ceiling is the same as C#'s. Certainly your code would get less idiomatic as you reduce GC pressure, but I'm not sure it's harder than in C#.


I dunno, there are some weird places where idiomatic F# allocates even when you avoid the usual suspects, but can be trivially rewritten not to allocate. (There are even bugs which cause allocation. Try iterating over an array of `unit -> unit`, calling each in turn.)
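
i.e. something like this, which you'd expect to be allocation-free:

    // an array of thunks, invoked in turn; iterating and invoking these
    // has allocated unexpectedly in versions I've tried
    let actions : (unit -> unit)[] = Array.init 3 (fun i -> fun () -> printfn "thunk %d" i)
    for f in actions do f ()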


I have frequently come across 'multi-core' code using fancy features of fancy languages (e.g. Scala) that gets a 205% speedup on an 8-core machine when I should be getting more like a 750% speedup, doesn't get the same answers every time, etc.

The choice in that situation is to risk spending a few days debugging (50% at best success rate) or spend 20 minutes and use a ThreadPoolExecutor and you're done.

Java was the first programming language specced out by adults, and it was early to get a correct memory model and correct concurrency primitives. 'Correct' is a much better attribute than 'new' or 'shiny'.


> 'correct' is a much better attribute than 'new' or 'shiny'

F# first appeared in 2005. I think it is safe to say it is not 'new or shiny' anymore.


The other thing is that most programs are probably doing nothing most of the time. Waiting for user input, waiting for network IO, waiting for some other real-world event.

For most developers, it's probably not worth using another language so you can "wait faster" most of the time. The performance-sensitive code can be scrutinised and properly optimised on a case-by-case basis.
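
And when the code really is just waiting, overlapping those waits is cheap anyway; a minimal F# async sketch:

    // two 100 ms waits started as children run concurrently,
    // so the whole block takes ~100 ms rather than 200
    let both = async {
        let! first = Async.StartChild (Async.Sleep 100)
        let! second = Async.StartChild (Async.Sleep 100)
        do! first
        do! second
    }
    Async.RunSynchronously both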


Really, the "reason why computers suck in 2021" is that user interfaces are fundamentally single-threaded, and that every time you see a spinning circle or beach ball on your computer, some application dropped the ball and none of the UI will render ever again until some function returns.

Maybe that function is waiting for a signal from Mars, or a DNS lookup, or a 3GB file download, or for an O(N^2) algorithm to finish (e.g. does your "no code" tool know when to say "no"?), but it hangs up everything else.

Web browsers are huge programs written in C++ by the greatest software development organizations, and they block the render thread as little as possible. JavaScript, as embedded in the browser, inherits this property, and that is why it is so dominant on the front end.

Perhaps something more radical would work, like breaking the application up into tasks that are killed after (say) 50ms and can be individually crashed (like Erlang), but going from that idea to implementation is a lot of work -- one thing to do for a very simple mobile app, another to write a framework you could build an IDE like VS Code in.

And that is what I find so irksome about the hand-wringing behind "Why isn't language X more popular?": if you really want to be doing something harder than what most people are doing, you will need to thread a path from beginning to end, and you're going to do that first through mastery of computer science (general) and second through mastery of your tools (specific).

There is so much path-dependence everywhere. I have gotten into the Arduino hobby lately. A 1970s computer scientist would think it insane that it uses C instead of something more like Pascal or PL/I, and would think it should have a language that respects the Harvard architecture, etc.

They'd be right. C's a terrible language to use for that purpose, except that (1) the developers of Arduino could get a good C compiler and toolset off the shelf, (2) many people know how to program C, (3) you aren't wasting your time learning C, (4) C isn't that bad, and (5) you can't write that big of a program for an Arduino anyway, so you can only get into so much trouble with a "buffer overflows included" language.


> correct concurrency primitives

wait? synchronized? notify? notifyAll? Fat threads and CompletableFutures that won't cancel? No thank you!

I'll take green threads and transactions over them any day!



