I've written some threaded ruby code in the past and in addition to what was said in this article I would like to emphasize 2 more points:
1. Every time you have a shared data structure, put a Mutex around its usage (even in POC examples, so people learn to do it properly). In the reproduction example, line 4 has `ids = Set.new` (the shared data structure) and line 9 has `ids << semaphores['test'].object_id`; since this runs inside a thread, you need a Mutex around the push to `ids`. You can still keep `semaphores['test'].object_id` outside the Mutex: assign it to a variable, then push that variable to `ids` inside the synchronize block to achieve the same result. If you put both inside the Mutex, it would hide the presented issue.
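A minimal sketch of that fix, assuming the reproduction example looks roughly like this (the lazily-populated `semaphores` hash and the thread count are my reconstruction):

```ruby
require 'set'

# Shared structures: a lazily-populated hash of mutexes (the thing under test)
# and a Set collecting the object_ids observed by each thread.
semaphores = Hash.new { |h, k| h[k] = Mutex.new }
ids = Set.new
ids_lock = Mutex.new # guards every access to `ids`

threads = 100.times.map do
  Thread.new do
    # Read outside the lock so the race being reproduced stays visible...
    id = semaphores['test'].object_id
    # ...but serialize the Set mutation, since Set is not thread-safe.
    ids_lock.synchronize { ids << id }
  end
end
threads.each(&:join)

puts ids.size # > 1 would indicate the lazy Hash default handed out multiple mutexes
```

Locking only the `ids << id` part keeps the test honest: the racy read of `semaphores['test']` still happens concurrently, so the issue being demonstrated is not masked.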
2. Write stress tests that loop through your multithreaded code many times with trivial workloads; 100 iterations might not be enough to trigger a concurrency issue, you might need tens of thousands, millions, or more. So don't be afraid to make the loop size configurable and just run it for a longer period of time, just to make sure. In CI you can tweak the loop size so it runs for, let's say, 3-5 minutes. That should make it relevant and still keep it fast/cheap enough.
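A sketch of such a harness; `STRESS_ITERATIONS` is a hypothetical environment variable, kept small by default and cranked up in CI:

```ruby
# Hypothetical stress harness: the iteration count comes from the environment,
# so CI can raise it until the run takes 3-5 minutes without any code changes.
iterations = Integer(ENV.fetch('STRESS_ITERATIONS', '200'))

failures = 0
iterations.times do
  counter = 0
  lock = Mutex.new
  threads = 4.times.map do
    Thread.new { 1_000.times { lock.synchronize { counter += 1 } } }
  end
  threads.each(&:join)
  failures += 1 unless counter == 4_000 # a miscount means a race slipped through
end

puts "#{iterations} iterations, #{failures} failures"
```

Run locally as-is, or in CI as something like `STRESS_ITERATIONS=500000 ruby stress_test.rb` until it takes a few minutes.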
Additionally, Ruby now has Ractors, though they are still marked experimental. These are amazing, as you not only get safe parallelism (not just concurrency, true parallelism) but also data safety: a piece of data is either copied deeply, or its ownership is passed to one single Ractor. Fingers crossed we see them stable soon.
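A minimal example (assuming Ruby 3.x, where Ractor is available but prints an experimental warning):

```ruby
# Each Ractor gets its own isolated copy of the argument, so there is no
# shared mutable state between them; the blocks can run in true parallel.
squares = 4.times.map do |i|
  Ractor.new(i) { |n| n * n }
end.map(&:take) # take blocks until each ractor returns its block's value

p squares # => [0, 1, 4, 9]
```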
I think Java is accidentally on that list, because it has had an excellent concurrency story since it came out, got complicated when they removed "green threads" (thereby pushing thread pool management into the programmer's set of responsibilities), but is reworking its former ergonomics with Project Loom.
For the rest of the listed languages, I definitely agree.
Hard disagree. The Java concurrency story was absolutely awful when it came out. It had terrible ideas which panned out about as well as checked exceptions did, and got forgotten when the collections were reworked. The only things it had going for it were showing concern and a well-defined memory model, but even that was broken until JSR 133.
Java concurrency got a lot better with JSR 166, by accreting new APIs. But those require awareness of their existence and purpose; the language baseline is no better.
Project Loom does not fundamentally change any of that. Its goal is to increase efficiency, aka get more wrong answers faster.
What do you mean by "get more wrong answers faster"? Given that volatile / synchronized have largely gone the way of the dodo (ditto Object.wait etc.), and that concurrency-safe atomics and collections are de rigueur, how do green threads reverse that? (I understand your point that you need to use the good stuff to take advantage of it, and that the old bad stuff is still in the base language, but I feel that's true for pretty much every language that's 30ish years old.)
I spent years working on concurrent Java apps post JSR 166, and the only concurrency issues I met were PEBKAC, plus some of the awful things people did with Vert.x and its event bus.
Do you have a good resource or summary on why volatile / synchronized are problematic and java.util.concurrent should be preferred?
I always remembered the same, but I was recently in a conversation with Java devs who contradicted it, and I could not articulate well why this is the case, nor could I find good resources on it.
Anyone can do "concurrency" by starting from scratch with a fresh process each time. That isn't "having a concurrency story"; it's rather the opposite, I think. It's very wasteful of system resources, energy, etc.
PHP's model would be okay if it used green threads/goroutines. That way, you can have a far larger number of workers, as most of them are blocked on database connections or HTTP requests to other services.
It has proved successful so far. The shared-nothing, short-lived process avoids whole classes of issues, such as slow memory leaks, accidentally blocked event loops, and shared-memory threading issues.
Most PHP sites will utilise php-fpm, so it isn't really true that each request will spawn another PHP process.
The language historically hasn't been that great, but its shared-nothing architecture has always been the good part.
Yeah. Unless you're already running a big Rails app that's not a good candidate for a rewrite, what's the point? Nothing against Ruby, but just use Elixir/Erlang already.
Rails remains a best-in-class framework for web application development, and Ruby (in my opinion) a fantastic programming language (amazing object model + standard library).
Sigh. As an old Rails hand myself - hell, I credit Rails for much of my tech career - I think I now have to disagree. Rails is great, indeed the best, for its use case, which is CRUD apps. That might still be enough. But what is expected of applications has changed over the years and Rails isn't really capable of meeting some of these new expectations, not alone anyway, and developers trying to implement them find themselves reaching for ever more kludges and workarounds and 3rd party software trying to fit, essentially, a square peg into a round hole.
Applications these days - well, a large number of them - need to be realtime. Server push. Websockets. Push notifications. Scheduled jobs. Long running background jobs. Calls into other services. Presence awareness. The list just goes on and on. And sure, you can somehow deal with all of this in Rails - hell, you can do anything in any language given enough time and effort - but you are absolutely going against the grain, and you're not using Rails anymore, you're using Rails + sidekiq + node + some other thing + xyz. I thought Rails was supposed to be the simple option?
Rails still might be the best choice, if you're sure your domain will never need to do anything long-lasting or concurrent. Internal admin apps or simple e-commerce would be good examples. But if it's going to be more than that then Rails might save you some time at the beginning only to bite you badly later on.
Phoenix 1.7 is out now, and I basically recommend that all Rails developers start learning it. There is going to be a bit of a learning curve, and it's not quite as lovable as Ruby with its near-perfect syntax, but it is vastly more capable and is, IMO, the way forward. Frankly, I don't understand why it isn't much, much more popular.
I really have to disagree. My perspective is that of somebody who did some Rails work >10 years ago, followed the hype/money into Node.js and eventually Go, then ended up working in Rails again ~4 years ago.
In my experience a bog-standard vanilla Rails + Postgres setup provides all of the things you mentioned (except presence awareness, which is pretty tricky).
* Server push. Websockets. Push notifications. => ActionCable
* Scheduled jobs. Long running background jobs. => ActiveJob
* Calls into other services. => Pick your preferred HTTP library (or just use Net::HTTP)
All of the above have been part of Rails for years. The only additional Gem I would add is good_job to be the ActiveJob backend.
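For illustration, a non-runnable configuration sketch of that setup; `DailySummaryJob` and its contents are hypothetical, while `queue_adapter = :good_job` is GoodJob's documented wiring:

```ruby
# Gemfile
gem "good_job"

# config/application.rb -- back ActiveJob with Postgres via GoodJob
config.active_job.queue_adapter = :good_job

# app/jobs/daily_summary_job.rb -- a hypothetical job using the plain ActiveJob API
class DailySummaryJob < ApplicationJob
  queue_as :default
  retry_on StandardError, wait: 5.minutes, attempts: 3

  def perform(user_id)
    # build and deliver the summary email here
  end
end

# Enqueue with a delay -- no Redis or Sidekiq involved:
DailySummaryJob.set(wait: 1.day).perform_later(user.id)
```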
Now, if you start to bump up against what vertically scaling Postgres can handle, or you want some of the _additional features_ of 3rd party dependencies (Redis, Sidekiq, Webpack, etc.), you can easily add them, but it's really unnecessary for 99% of apps out there.
All those things you listed require additional background servers, Redis, etc. In Elixir/Erlang it's built into the runtime, which is a huge advantage. And even with standard things like handling HTTP requests, Elixir absolutely smokes Ruby in terms of performance. This leads to real cost savings as well.
Those built-ins that you are talking about are not as well rounded as a dedicated framework, and they operate under a different operating model than running additional background servers. This typically means that everyone in the org needs to be an Elixir/Erlang expert instead of having experts in Redis, cloud native, ... For better or worse, this is likely a large obstacle for many orgs.
The argument for using Elixir/Erlang is also more difficult when you have large companies like Github and Shopify demonstrating that Ruby can scale.
EDIT:
Let's not forget that it's mostly the DB that slows CRUD apps down. Not the language or the framework.
No, they don't. You can absolutely run ActionCable with the postgres adapter in your main rails app server and require no additional services.
Edit: this is also why I mentioned GoodJob, it supports the full ActiveJob API (delays, retries, etc), even comes with a nice Web UI, and it only requires Postgres.
> Phoenix Presence is a feature which allows you to register process information on a topic and replicate it transparently across a cluster. It's a combination of both a server-side and client-side library, which makes it simple to implement. A simple use-case would be showing which users are currently online in an application.
> Applications these days - well, a large number of them - need to be realtime
A large number of applications where, and for what purpose? I think a large number of applications don't need to be realtime. The majority of applications are ones we never see, or frankly never know exist: SMBs we've never heard of that are approaching (or have approached) the seven-figure revenue mark.
I do think, though, that a large number of developers have had their perception poisoned by this very crowd: that they need real-time, or a front-end framework, or plans for massive scale because they think they have to build the next Stripe or Twitter or FAANG-scale thing. Many of us, just like many applications, aren't going to scale like that, or hold those jobs.
The internet blossomed without real-time just fine. I think that it'd be just fine without it.
The funny thing is that I basically agree with you 100%. I constantly rail against, and mercilessly tease, companies building with microservices from day 1 or going with low-level high-performance languages like golang or rust before they even have a single customer "because FAANG does it". Monolith until you literally can't anymore. High level until you are forced to use something lower.
I did say that Rails is perfectly adequate for certain classes of applications. I guess the point I was trying to make, poorly as it turns out, is that for consumer applications at least, Rails is no longer the "sweet spot" and you hit up against its limitations earlier than ever and will, not might, will be forced to deploy ever more complex workarounds for basic functionality that you get out of the box in something like Phoenix. OK, forget websockets. How about scheduling a daily summary email? Daily reports? Anything other than a build-the-world, serve-request, tear-down-the-world HTTP query? Now you're running some separate thing and boom, there goes the simplicity.
I get you. I'm the "use boring tools" guy as well. But the tools have to be actually capable of doing the job, and the job has changed, well the kind of things I seem to be involved with have changed, and the Rails productivity "edge" lasts weeks at best.
And they're all inferior to what Elixir can do out of the box, without duct-taping on 3rd party tools. Running other services like Redis is not only not a breeze, it's an additional expense and another thing to monitor.
I assume most big Elixir apps use Redis or something similar anyway for its speed and reliability (it can periodically persist, for instance) instead of saving everything in memory. Redis is widely used, with huge community support, and you usually don't want to lose all your background job info (you don't have to use an actual DB, but Redis seems like a good compromise for many companies). If you don't want Redis, there are background job gems for Postgres/MySQL etc.
As for monitoring - in our current microservice world, adding or removing Redis is peanuts. You have so much stuff to monitor that you need strong monitoring anyway, and usually a whole team dedicated to setting it up. Doing that for Ruby or Elixir is negligible. I'd even say Ruby is more straightforward to monitor for devops people than Elixir (that's from stories I've read, I'm not an expert on that).
There are plenty of real issues with Ruby (and with Elixir), but what you're arguing here are simply non-issues, IMO.
Fair points regarding eventually using redis; it is common. But we haven't had to reach for it yet and handle some light caching and background jobs right in Elixir.
As for monitoring, it's better than it used to be - Elixir's Telemetry library is pretty awesome. There's even some UIs built for it:
Ah yeah, was generalising somewhat. You're right, it's perhaps a "mid" level language. Maybe even high level in areas it is intended for, such as channel tooling, etc. Regardless, it's much more verbose than true high-level languages such as Ruby and I would not consider it a good choice for a startup unless they were specifically writing actual infrastructure code.
Regardless - it's certainly implicated in the cargo cult of "dozens of golang microservices all talking to each other in a combinatorial explosion of gRPC", an antipattern I've seen startups succumb to before. One of them ran out of runway with fewer than 10 actual customers, after spending 18 months building an MVP that would "scale".
Maybe I can propose a new law: "If you have more microservices than you have customers, you are scaling prematurely".
> How about scheduling a daily summary email? Daily reports?
Forgive me if I'm wrong (I don't know Phoenix that well), but don't you need some external library like Exq to perform background jobs? How is Phoenix+Exq different from Rails+Sidekiq?
You don't need to run a jobs server; the runtime itself handles the processes. Most people use a library, but the work still happens in the language runtime. And you can even build your own on top of supervisors, GenServer, etc.
> Frankly, I don't understand why it isn't much, much more popular.
It's been a few years since I walked away from Elixir and Phoenix for recreational projects so I've forgotten the finer points. The two things that bothered me the most were that a.) Erlang treats BSD as a second class citizen and b.) I got the sense that there was a lot of cargo culting going on. Getting an informed answer felt like it was just that much more difficult than with say Ruby.
Professionally, as an ops monkey, I wouldn't want to be on the hook for supporting an Erlang or Elixir app. There's definitely a chicken-egg problem and I'd worry about finding coworkers who would be comfortable with Elixir, but there are also simply far too many moving parts. Like. Yeah okay channels are cool, in-place upgrades are cool, and a well disciplined team could make good use of them but to me that all sounds like a lot of very tempting footguns. All of a sudden I'm not just supporting an app, I'm supporting an entire runtime on top of Linux. I'd much rather deal with a single binary like e.g. go or rust provide, and I'd much rather not deal with Erlang processes and whatnot.
That said the language itself is great. At the time I started dicking around with Phoenix I was working with a guy who was making a big push to use clojure for internal tooling. I liked the Elixir syntax which felt like a great mashup of Ruby and Clojure.
> All of a sudden I'm not just supporting an app, I'm supporting an entire runtime on top of Linux. I'd much rather deal with a single binary like e.g. go or rust provide, and I'd much rather not deal with Erlang processes and whatnot
(Not trying to be argumentative.)
We have been quite OK just deploying our Elixir stuff as containers, like anything else (this seems to be common even for Go services, even though the binary alone is theoretically enough). Connecting a cluster of BEAM containers is not really any different from a cluster of Go containers, etc. Possibly when you last used Phoenix it didn't have built-in releases (a portable, compiled, packaged dir + binary) and a Dockerfile generator.
I likely lack the perspective you have, but I think you can discard a lot of the BEAM's "classical" deployment story for something that's no different from Rails/Go/etc., and I don't think you lose anything besides hot-code upgrades, which I think aren't really needed in today's multi-node infrastructure. A container going down isn't any different from a raw BEAM node going down, so it's much of a muchness.
Elixir has better concurrency, we get it.
It also has a tiny market share and other issues of its own (steep learning curve, functional programming, having to dive into Erlang sometimes etc etc).
By choosing Elixir you trade one problem for a bunch of other problems, and most CTOs don't really care for the tradeoff Elixir offers. Community, learning curve, and availability of libraries are way more important than how many pods you have to run.
I'd go with Python/Java/PHP/Ruby/Node and many other stacks before going with Elixir and I think many people agree with me.
Subjective, but I don't think it's fair to say Elixir has a steep learning curve. It's quite readable, well documented, and has a good REPL so you can experiment. You can even view documentation in the REPL.
It might depend on which languages you already know, but if you've done modern JavaScript, a lot of the functional elements will be familiar.
When I first started learning Rails at a company, my ex-boss hired me even though I had 0 experience in RoR and 0 industry experience in web development. I came from C++. So jobs were plentiful. Now companies that want to hire Elixir devs don't have that mindset, none (practically), and this is why Elixir adoption is stuck.
Well, the 3rd party ecosystem is minuscule in Elixir compared to Ruby. You want to iterate fast but get bogged down by missing/unsupported libraries, or by having to learn Erlang to use its libraries from Elixir, etc. Lots of services also don't publish official SDKs for Elixir but do for Ruby, etc. The list goes on.
There's more to a non-trivial project than what the language itself provides.
The BEAM is awesome, but both Elixir and Erlang are not really great for concurrency. They lack support for pure functional programming, which is really one of the most useful things in this area.
I've been doing Elixir professionally for 6 years. There's a whole class of problems that are best solved with concurrent read/write access to data. So the answer to your question is exactly that: when you want N-cores to work on the same piece of data.
Using Elixir/Erlang on the BEAM is great for concurrency, but the programming languages themselves are not. It's an important distinction, because I believe there is still a lot of potential in those languages.
What is it about pure functional that makes concurrency much easier? Is there a code sample that would illustrate it?
I've done both Haskell and Erlang/Elixir and yet I don't really see what you're referring to concretely. There was hope that automatic parallelization would be a big win for pure functional, but I don't think it's really worked out in practice, because of the overhead and difficulty of predicting if parallelizing something would make it faster or slower.
It's not about performance as much as correctness and ease of understanding code. Functional programming removes the need to simulate (local) state in your head when trying to understand what code does. And in the places where state is inevitable, it makes that explicit and leads to a design where it's easy to track why and how things got changed.
For example, .map and .filter are now in almost every language, because they are much easier to reason about than for-loops. You see them and you know: "aha, this collection will be transformed, the number of elements stays the same", or "this collection will be filtered; the elements will look the same, but some might be gone and no new ones will have been added".
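In Ruby terms, the guarantees those two convey:

```ruby
numbers = [1, 2, 3, 4, 5, 6]

# map: element count stays the same, each element is transformed
doubled = numbers.map { |n| n * 2 }

# filter/select: elements are unchanged, but some may be dropped
evens = numbers.select(&:even?)

p doubled # => [2, 4, 6, 8, 10, 12]
p evens   # => [2, 4, 6]
```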
Pure functional programming is similar, just that the scope is suddenly "the whole runtime/machine" instead of "a small piece of code that somehow does a loop".
If I understand correctly, you're saying Erlang/Elixir code would be easier to understand and work with if functions that did I/O or message passing etc. were labeled as such, via type info.
I don't entirely disagree, but I also don't think it would be a game changer for a couple reasons...
- It's already well established practice to separate pure and impure modules in Erlang/Elixir projects, even without a type system to enforce it. Take Ecto for instance, it has a clean split of side-effects (Ecto.Repo) and pure logic (Ecto.Changeset). Very different from things like ActiveRecord or Django ORM. [1]
- BEAM programs (and libraries) use a lot of concurrency and message passing, and I think you would have to use an escape hatch à la unsafePerformIO more often than a typical Haskell program would, otherwise the IO would "infect" most of the code. Things like instrumentation, fetching app environment, calling the code server, using the process dict...
Joe Armstrong kind of mentions the latter problem in his thesis [2]:
> Notice that I have chosen a particularly simple definition of "dirty." At first sight it might appear that it would be better to recursively define a module as being dirty if any function in the module calls a "dangerous" BIF or a dirty function in another module. Unfortunately with such a definition virtually every module in the system would be classified as dirty.
> The reason for this is that if you compute the transitive closure of all functions calls exported from a particular module, the transitive closure will include virtually every module in the system. The reason why the transitive closure is so large is due to "leakage" which occurs from many of the modules in the Erlang libraries.
> We take the simplifying view that all modules are well-written and tested, and that if they do contain side-effects, that the module has been written in such a way so that the side effects do not leak out from the module to adversely affect code which calls the module.
He is of course talking about "dirty" _modules_. Working at the function level, the leakage wouldn't be quite as bad... but I think it may still be enough to limit how useful IO annotations would be. Code that "has been written in such a way so that the side effects do not leak out" is quite common on BEAM.
Who knows though. Maybe we'll see interesting things in Gleam in the future :)
> If I understand correctly, you're saying Erlang/Elixir code would be easier to understand and work with if functions that did I/O or message passing etc. were labeled as such, via type info.
No, not quite. Pure functional programming is a specific style or paradigm. It's about writing referential transparent expressions only. Tagging something is "impure" is absolutely not the same, even though doing so and/or separating pure and impure functions is a good first step on the way to pure functional programming.
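To make the distinction concrete (in Ruby, since the thread started there): a referentially transparent expression can be replaced by its value; merely labeling the second function "impure" wouldn't make it one.

```ruby
# Referentially transparent: same inputs always give the same output and
# nothing else observable happens, so `total([1, 2, 3])` can be replaced by 6
# anywhere in the program without changing its meaning.
def total(prices)
  prices.sum
end

# Not referentially transparent: the result depends on hidden state (the
# clock) and the call has an observable side effect (writing to stdout),
# so the expression and its value are not interchangeable.
def discounted_total(prices)
  puts "computing at #{Time.now}"
  rate = Time.now.saturday? ? 0.9 : 1.0
  (prices.sum * rate).round(2)
end
```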
> Who knows though. Maybe we'll see interesting things in Gleam in the future :)
Would be nice and might make me switch ecosystems. As of now, there don't seem to be concrete plans:
> Yes, Gleam is an impure functional language like OCaml or Erlang. Impure actions like reading to files and printing to the console is possible without special handling.
> We may later introduce an effects system for identifying and tracking any impure code in a Gleam application, though this is still an area of research.
Speaking of important distinctions, concurrency isn't a property of a language, but rather a property of a runtime. It's the old procedure vs process distinction. Now a language can have nondeterministic constructs that beg for a concurrent runtime, but that doesn't actually require it. After all, a nondeterministic construct can be implemented deterministically. If the language doesn't specify the order the nondeterministic cases are handled in, then the runtime is free to handle them in some deterministic way.
Incidentally that's the root of the misconception that pure functional languages are easier to reason about. It's operational reasoning that's problematic, because simulating a nontrivial state machine in one's mind is a great cognitive challenge for those of us who are less bright than John von Neumann. However, the Dutch basically solved this problem and provided a logical framework for reasoning about imperative programs in a way based off the program text, with no need for attempting to mentally simulate the probably infinite set of possible processes it describes.
> However, the Dutch basically solved this problem and provided a logical framework for reasoning about imperative programs in a way based off the program text, with no need for attempting to mentally simulate the probably infinite set of possible processes it describes.
Do you mind clarifying what the framework is that you mean? A pointer would be helpful for me.
> Speaking of important distinctions, concurrency isn't a property of a language, but rather a property of a runtime.
But the way you can and will describe how the runtime should execute code concurrently is highly dependent on the language. Compare Assembly with Go or Rust or Erlang. Completely different worlds, don't you think?
> Incidentally that's the root of the misconception that pure functional languages are easier to reason about.
It's not a misconception. People are different, but at least I can say after many years of experience in both worlds that pure functional programming is much much more productive than any kind of other approach that I have tried when it comes to dealing with concurrency, parallelism and effects.
I hear you, but you’re missing the point. The sole reason why you find pure functional programs easier to reason about is because that is one strategy for avoiding operational reasoning. However it’s equally possible to avoid operational reasoning about imperative programs. Hence my claim that it’s a misconception.
That's like saying "more important than food is having a full stomach". Pure functional programming is a tool (or paradigm) that makes the things you mentioned extremely easy and safe to implement and extend. It focusses on the local runtime though, not on distributed systems (where the BEAM comes in) - which is why they don't contradict themselves.
BEAM is not a pure functional language system. There's lots of hidden state: message passing, IO, process dictionary, ets tables, counters, sockets, NIFs, etc.
Your original statement lacks any nuance beyond "Elixir/Erlang are bad because no pure functional programming". Any attempts to clarify resolve to a basic "it's for the good of the whole program".
Whereas easy-to-use concurrency primitives, effortless parallelisation and concurrency, and even the most basic stuff like ability to put a logging statement anywhere in the code without re-engineering half of your program trump whatever imagined advantages of pure functional programming you may come up with. Any day of the week, and twice on Fridays.
In theory, theory beats practice. But in practice...
I did not intend to write an essay of why PFP would be great to have in Elixir/Erlang. I'm just posting my opinion here and I'm happy to explain it as you can see from my answers.
> ability to put a logging statement anywhere in the code without re-engineering half of your program
Let me ask you a question: does it change the semantics of your (whole) program if that logline, that you are talking about, is not executed for some reason - or if it is executed more than once?
If your answer is "it doesn't really matter, might at most be a bit annoying, but it's just a log, no stakeholder of the program can ever notice", then there is no problem with putting this log line into the program. No need to re-engineer anything.
If, however, this logline is critical and will be e.g. parsed and used by another system and actions might be taken due to it, then I would argue it is good if you are forced to consider the potential impacts to your program. If that means that you need to re-engineer half your program, then there is a good reason for that, since the potential impact is huge. Such a thing has never happened to me in many years of working on different kinds of applications. Sometimes a couple of 100 lines need to be rewritten - that is the max that I ever had to do. And indeed, sometimes this rewrite in fact caused me to find and resolve problems that would otherwise have been introduced by accident.
Because yes, it's a fact that in a "pure functional program" you need to re-engineer half of the program if you need to put the log somewhere where it's "oh so pure", and where you didn't need logging before [1]. Or thread IO through the entire program to begin with.
> Sometimes a couple of 100 lines need to be rewritten - that is the max that I ever had to do.
Whereas in a pragmatic language you just add `Logger.log` or equivalent.
[1] Spare me the pontification of "if you need logging, you're doing something wrong, this must be covered by tests or type systems". There are things you must log like metrics, audit logging, tracing values through the system, and it is a 100% certainty that you will add logs to places where no logging existed before.
> it's a fact that in a "pure functional program" you need to re-engineer half of the program if you need to put the log somewhere where it's "oh so pure", and where you didn't need logging before
It's most certainly not a fact. It's trivial to insert Debug.trace, for example.
You have completely ignored my question and the explanation of why I asked it. I have to assume you just want to rant a bit here. Sorry for your bad experiences, but they are hardly representative.
> You have completely ignored my question and the explanation of why I asked
I did not. I wrote that it amounts to demagoguery.
The reason is simple: a pragmatic language lets you write a single `Logger.log` line without pseudo philosophical discussions on the semantics of a program and "sometimes re-writing 100 lines of code".
It's no wonder any discussion of "how to do logging in Haskell", for example, devolves into discussing the merits of various types of monads and "composable co-monadic contravariants", with 15 equally cumbersome ways of using them.
Not that I'm a fan of Haskell, but I believe that if this log is critical to the behavior of the program AND it is inserted in a place where important effects have previously not been expected (such as pure mathematical calculations), then it is a good thing that the language makes you aware of that. Someone might, for example, be caching/memoizing those calculations, and suddenly your log isn't always executed, and that might be a bug.
I'd much rather evaluate the impact in advance than have to find and figure it out in production.
> I much rather prefer to evaluate the impact in advance rather than having to find and figure it in production.
See, this is exactly the zealotry and demagoguery I am talking about.
99% of use cases: we need to add a single line of logging here
Pure functional programming cultists: first we must consider the semantics of the program and the implications of logging in the grand scheme of things. Consider the criticality of a log line. What is a log line? ... <two hours later> ... and lo, once you've done the refactoring to consider the co-variants ... <another two hours later>
Edit. I'll reiterate:
Easy-to-use concurrency primitives, effortless parallelisation and concurrency, and even the most basic stuff like ability to put a logging statement anywhere in the code without re-engineering half of your program trump whatever imagined advantages of pure functional programming you may come up with. Any day of the week, and twice on Fridays.
If you need hours to figure out if a log line is relevant or not, then you have much more important problems to take care of.
For example, if this is a log line for audit logging in an enterprise product then it's clearly relevant. If this is a trace log line in case you need to debug some minor issue, then that's a different story. Has nothing to do with zealotry, just common sense. If you don't understand that there is a difference between those two cases then the discussion ends for me here.
> If you need hours to figure out if a log line is relevant or not, then you have much more important problems to take care of.
I don't.
> Has nothing to do with zealotry, just common sense.
Ah yes. "does it change the semantics of your (whole) program if that logline, that you are talking about, is not executed for some reason - or if it is executed more than once?" vs. "this logline is critical and will be e.g. parsed and used by another system and actions might be taken due to it, then I would argue it is good if you are forced to consider the potential impacts to your program. "
etc. etc.
And yet the fact is that if you need to log something your precious "mah purity" function is doing, you're stuck with "sometimes a couple of 100 lines need to be rewritten".
Whereas non-cultists just do a `Log.info` etc.
If you can't understand that, well :shrug:
Edit. I just re-read that inane pseudo-philosophical bullshit about "the semantics of the whole program" for a log line. No, I definitely don't need to consider the semantics of the whole program to add a bloody log line.
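For reference, the Ruby version of "just do a `Log.info`" really is a single stdlib call, droppable into any method. The checkout function here is a made-up example; the log is written to an in-memory buffer just so the sketch is self-contained.

```ruby
require 'logger'
require 'stringio'

buffer = StringIO.new          # a real app would log to $stdout or a file
log = Logger.new(buffer)

def checkout_total(items, log)
  total = items.sum
  log.info("checkout total: #{total}")  # the contested one-line log
  total
end

total = checkout_total([5, 10], log)   # logs and returns 15
```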
I think you misunderstood me. I said "They lack support for pure functional programming" which does not imply that they should force the developer into using it.
Basics for pure functional programming support would be some kind of IO-effect-type, tail-call-optimization (Elixir supports that), syntactic sugar for IO-combinators (to build up the "concurrent execution-plan" without actually executing it).
Ideally also stdlib functions and an ecosystem that help. Those can be rebuilt, but most people consider them tightly tied to the language, so without them it probably couldn't be called great support.
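The "IO-effect-type plus combinators" idea can be sketched even in Ruby (all class and method names here are invented): a value that merely *describes* an effect, composed into an execution plan that runs nothing until you ask it to.

```ruby
# Toy effect type: wraps a computation without running it.
class DeferredIO
  def initialize(&thunk)
    @thunk = thunk
  end

  # lift a plain value into the effect type
  def self.of(value)
    new { value }
  end

  # bind-like combinator: extend the plan with a next step
  def and_then(&f)
    DeferredIO.new { f.call(@thunk.call).run }
  end

  # effects happen only here, when the plan is interpreted
  def run
    @thunk.call
  end
end

steps = []
program = DeferredIO.new { steps << :read; "hello" }
                    .and_then { |s| DeferredIO.of(s.upcase) }
pre_run_steps = steps.dup  # still [] -- building the plan ran nothing
result = program.run       # only now do the effects execute
```

The "syntactic sugar" point is that languages with real support (Haskell's `do` notation, for instance) let you write chains like this without drowning in nested blocks.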
I haven’t found anything like Rails when it comes to developer ergonomics.
I like typed languages and I feel safer with them, but there are things in Rails I haven't seen in other frameworks before, like the web console that instantiates a REPL in the browser with the context of the loaded page.
Thread safety in Rails is a total pain in the ass. So many libraries out there are not thread safe. I have worked with many large Rails codebases and there are always gems that somehow work their way in with major threading issues. I think it's a legitimate strategy to run Rails apps single threaded and multi process. Not necessarily for all apps or workloads, of course.
Multiprocess is more often than not better for latency anyway. Otherwise a big serialization or other cpu clog will block up the other, potentially quick, requests on the same process.
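As a sketch, the single-threaded, multi-process setup described above comes down to a few lines of Puma config (the worker count here is illustrative, not a recommendation):

```ruby
# config/puma.rb
workers 4          # one process per core; no state shared between them
threads 1, 1       # exactly one thread per worker, so thread-unsafe gems can't race
preload_app!       # boot the app once, then fork workers (copy-on-write friendly)
```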
I honestly don't understand why Elixir and the Erlang/BEAM ecosystem haven't taken over the software world more completely. It is inconceivable to me to begin a new web app using Ruby on Rails when I could use Elixir and Phoenix.
1. Once I got the hang of the functional programming paradigm it suddenly felt like a huge weight off my shoulders. I love never thinking about objects and the OO approach.
2. Concurrency is built in so thoroughly that there is an entire framework called OTP that provides powerful tools and supervisor processes and so forth to really take concurrency to the next level.
3. Once I got the hang of OTP, similar to when I got the hang of functional programming, it felt like a huge weight off my shoulders again. OTP makes concurrency so easy and lightweight that you think nothing of spinning up three dozen processes just to handle some trivial calculation or data transformation -- processes are like objects that are actually alive and useful and doing things, and OTP is like an orchestrator.
It just feels to me like everything is clearer, simpler, and more robust when I use Elixir and the BEAM/OTP ecosystem.
Also, writing a web application in Elixir means writing a program that you then fire up and which stays up until you bring it down or it crashes for some reason. In Rails, on the other hand, you write a script that is fired up every time an HTTP request arrives, builds its entire context for everything, handles the request, and then dies a split second later, only to start the whole process over again a split second after that. There is a certain simplicity in only firing things up once and then just keeping your context alive in perpetuity.
> I honestly don't understand why Elixir and the Erlang/BEAM ecosystem haven't taken over the software world more completely
I thought similar things in 2016 when I started learning Elixir and Phoenix, but after 2 years, building multiple apps, dabbling with it for years afterwards (up until about last year) and giving it everything I had (and more), it wasn't for me.
The conclusion I came to was that Elixir makes certain things that are hard in other languages easier, and there are certain things in other languages that are simple but work much differently in Elixir. For the apps I build (typical web apps like GitHub, etc.) I found that I ran into more scenarios where Elixir as a language made it harder for me to build the things I wanted. For the things that Elixir made easier, other languages have "good enough" solutions.
If the goal of all of this is to build things to provide value for others, choosing a stack with a larger ecosystem of tools and community support wins in my book when you've reached the point of "good enough" in other things. Responding back in <= 100ms for the p95 case is "good enough" and most web tech stacks can do this without a huge amount of server costs or extra effort if you're talking about million dollar business sized apps where you might be dealing with a few hundred thousand visitors a month for a typical SAAS app.
Things like Hotwire[0] are also tech stack agnostic. You can build really nice feeling apps with minimal effort. I spent about a week upgrading one of my apps to use it while learning as I went and it made a massive difference. Conceptually it was low effort to understand and didn't require rewriting everything. That has been the polar opposite experience I've had when I used LiveView.
According to Matz: concurrency is not the problem if you ask me; we can handle it. The issue is in the reads and writes to the database; the time spent there is the bottleneck.
I think you may be referring to Matz, creator of Ruby, who has made a stance against strong types in Ruby. We do have Sorbet, but that doesn’t go far enough for strong type enthusiasts. DHH is only a secondary voice in this area.
Typing in Ruby will be an optional thing. You can already do it now. It's much like opting in to TypeScript. If Rails won't add the types, it's going to hold back typed Ruby.
Yes that’s why everybody collectively moved on to static types exclusively 30 years ago and we all lived happily ever after because there was never a point where dynamic typing made more sense or was anything but inferior. The end.
My theory is that types were the baby thrown out with the bathwater. Dynamic languages changed a whole lot more than just types. The older typed languages were a huge pain to work with, dynamic languages fixed a lot of that and added standard libraries that do a huge amount of work for you.
Now TypeScript and Rust are showing typed languages can be just as productive, if not more productive than Ruby/Python/JS.
I don’t buy it. Static languages have certainly become more ergonomic, but if the evidence was really so overwhelmingly clear, everybody would have switched by now. To wit, regarding your comment about Rust, I just read this article earlier today: https://mdwdotla.medium.com/using-rust-at-a-startup-a-cautio...
> Static languages have certainly become more ergonomic, but if the evidence was really so overwhelmingly clear, everybody would have switched by now.
Change happens slowly in our industry, but Ruby very much feels in decline. More telling IMO is that every major new language to emerge this millennium has been typed.
TypeScript seems to have by-and-large "won" in the JavaScript ecosystem; I can think of exactly one senior+ developer I've run into in the last few years who's expressed a preference for JS over TS and, similarly, I can think of only one greenfield purely-JavaScript project I've seen started in the last three years; that project was built in that way largely out of ideological objection to having a build pipeline. (I think it was a mistake.) It isn't a majority yet, but it seems inevitable that the future of JavaScript among non-laggard populations is TypeScript or whatever will supplant it--and whatever will supplant it will not discard types.
As for Python, adoption is slower, but for me at least, Python typings are becoming the reason to consider using Python in 2022 and I won't use a library without them. And the worst part of Elixir, from where I stand, is that typespecs are bitterly underused. I have some hopes for something like Gleam to make a backwards impact upon Elixir, because my experience with development teams trying to bang on Elixir services without it is pretty grim.
Typescript won huh? You're using your own anecdotal experience to support your statement.
I'm looking at random JavaScript questions on Stack Overflow now - there's a gazillion of them from the last hour. None of them is TypeScript. So plenty of people still write JS.
I'm sure there are plenty of projects created in JavaScript. I'm not sure there are plenty of projects created in JavaScript in what I think we can all generally agree is leading-edge to mainline tech.
There will always be laggards and there will always be novices, but it is so incredibly rare to see a library without typings or a senior developer not start with TypeScript almost without thinking that I think it is at worst only a matter of time.
As an aside, why are all of your posts written so combatively? I looked at your most recent page of comments and if I'd not already responded to you, I wouldn't have. I certainly don't shy away from being pointy, but not about things as ultimately meaningless as programming languages.
I'll stay on topic. If I was somewhat combative towards you, it is because you made very big statements which in my opinion are simply inaccurate. My tone was somewhat harsh, but your comments are in my view simply wrong. They also give the air of a somewhat condescending know-it-all (of course TypeScript won!), which is a bit childish and can be very annoying to read, and this is going on quite a lot here with many other people who've seen the tech light and now need to hammer it into us commoners' heads.
Anyway, looks like we've beaten this horse to death.
>>> Just about everyone has switched from JS to TS
You are mistaken. While TS is popular it's nowhere near as popular as JS. In fact, just looking at online community counts, I'd say it's 1/2 as popular. Will it continue to gain? I'll never bet against JS.
I don’t think that’s correct and you’re reading too much into popular blog posts (“we tried TypeScript and decided it wasn’t for us” doesn’t make for a good article). I don’t have any hard data either, but since we’re swapping anecdotes, it’s my impression that typed Python's uptake has been very meh outside of multi-million-line codebases (and is basically not seen in ML work!), and typed Racket and typed Clojure are basically unused in practice.
Problem is switching programming languages is a big deal. It is a big bet to take. Static typing is probably not enough of an advantage alone but has to be taken into consideration with all the other tradeoffs.
By what measure of "productive"? If you mean developer productivity Rust is, by the admission of its own community, the slowest development environment. Rust is great where memory safety is paramount but it doesn't compete in the same playing field as Rails for CRUD apps.
I would say that with Rust you can be "more" productive by deploying apps that require less maintenance than, for example, a Ruby or JS app. Another pain with JS/Ruby is updating dependencies; that's a very scary thing to do if you are far behind, while with Rust you can update and instantly see where your code broke.
But I do agree that with Rails you can quickly have a more functional app, though that will sooner or later no longer be true.
(I might be very wrong as I don't have time to check every statement):
Actually the theory of static typing appeared _after_ the theory of dynamic typing.
So the history seems to be:
1. dynamic types
2. static types
Thus you can say some people didn't get the memo that we tried static typing and it is not a panacea. It fits some cases very well and does not fit some others.
Dynamic typing is here to stay, so is static typing.
And when referring to Ruby here is how to think about it:
Ruby is strongly typed, but it does not use static type checking thus it is defined as dynamically typed.
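A two-line illustration of that distinction (the method name here is made up): mismatched types raise a `TypeError` at runtime (strong typing), but nothing checks a method body until it actually runs (dynamic typing).

```ruby
# Strongly typed: no implicit coercion between Integer and String.
result = begin
  1 + "2"                 # raises TypeError, not "12" or 3
rescue TypeError => e
  e.class
end

# Dynamically typed: this type error goes unnoticed until the call.
def never_called
  1 + "2"                 # no compiler rejects this; it fails only if invoked
end
```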