Where is Ruby Headed in 2021? (bignerdranch.com)
299 points by thunderbong on Nov 19, 2021 | 349 comments

This is really cool. I’ve used a number of languages and dabbled in a few frameworks but nothing I’ve used brings me joy like ruby does.

I approach programming creatively. I think in large systems and architecture. Ruby allows me to skip worrying about the details. Code blocks abstract away loops and let me focus on the data. Just about every array operation I could want is there, waiting for a code block. In other languages, things don’t flow like that.
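As a small sketch of what that flow looks like (the data and names here are made up for illustration):

```ruby
# Hand-rolled loops replaced by block-taking Enumerable methods.
orders = [
  { item: "book", price: 12.0, qty: 2 },
  { item: "pen",  price: 1.5,  qty: 10 },
  { item: "mug",  price: 8.0,  qty: 1 },
]

# Each step takes a block; the data flows through the chain without
# index bookkeeping or accumulator variables.
big_total = orders
  .map    { |o| o[:price] * o[:qty] }  # per-line totals
  .select { |t| t > 5 }                # keep only the larger ones
  .sum                                 # single reduction

puts big_total  # => 47.0
```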

I’ve tried purely functional languages and I just can’t wrap my head around them because they abstract too much.

I’ve tried python, but I end up having to deal with the mess of importing modules and having to set up loops when all I want to do is map data.

Node.js comes so close, but its anemic standard library and the absolute mess of explicit async everywhere just get in the way. I want to love JavaScript. :(

Then there is Rails. Nothing even comes close. The attempts to replicate it in other languages end up missing part of what makes it so great. I’m sad about the move to mounds of JavaScript. Nothing can match the productivity of simple_form and bootstrap.

My entire career has been in Ruby on Rails. I hope improvements to Ruby can keep it relevant. If not, I suppose JavaScript will do.

I feel very similarly with Ruby and especially Rails. After working in Rails for multiple years, I'm now doing a freelance gig where the tech stack is React, NodeJS with TypeScript on Firebase Cloud Functions, and it almost sucks out all of the joy of programming.

Making software in Rails feels creative, and the act of writing code is in itself very satisfying.

A good opportunity to re-read the Rails Doctrine: https://rubyonrails.org/doctrine

Now, I know that this is very subjective and I can see how others might prefer other languages or frameworks, so I'm not adding this here to start a discussion. Just to share my feelings :-)

I had this exact realization getting ramped up in a large rails codebase: ruby is optimized for experience of the code author, while most of the things people miss from it (explicit imports, static types) help with code maintenance and codebase scalability.

The small/large project distinction makes sense to me. I think there’s a clear point in project size where the joy of writing code has to be deprioritized behind maintainability and simplicity, even if it becomes less fun to write code that way.

It's not a pleasant experience to come on board and deal with a typeless, styleless, inconsistent mess. And that's what Ruby on Rails will create, unless you spend an inordinate amount of effort on adding back all the tooling you need to make functional software in large groups.

Even then - there's no real path to a mature and maintainable language/framework. In my area (Atlanta) we're seeing a glut of RoR devs available because a lot of the large employers have given up and shut down their RoR projects in favor of anything more suitable to teams in the 15-to-500-engineer range. Usually this has been TypeScript/Java/.NET.

Even among the RoR guys I know - a LOT of them are trying to get out and work with something else (several folks I know are moving to elixir)

> a LOT of them are trying to get out and work with something else (several folks I know are moving to elixir)

Dumb move. Elixir has around 1,000 job openings in the U.S. while Ruby has 35,000. In fact, I find it pretty hard to believe they can't get a Ruby job in the U.S., and they choose Elixir of all stacks as a solution to their problem. Sounds to me like they just wanna try Elixir and can't really build a good argument in their head for why they're doing it. Jobs-wise, things are definitely not gonna be easier for them with Elixir.

I agree. Elixir is just too obscure.

>Even then - there's no real path to a mature and maintainable language/framework.

Hopefully that is what GitHub and Shopify bring to the table. Instead of just using Rails, they are now actively shaping its future direction.

>because a lot of the large employers have given up and shut down the RoRs projects

My impression is that the vast majority of Rails communities don't care much about its ecosystem growth. They are just happily using it.

This has been my experience. I gave in to learning Ruby and how things can be written several different ways. Even when tools like RuboCop and the like are used, it can still result in a codebase that is very difficult to use.

In my opinion, typed/tooled languages are useful when the desired solution/behavior/concept is complex.

Elixir is also dynamically typed but due to its pattern matching and immutability, I've found it eliminates about 90% of the issues that I've found with dynamic typing in other languages.

It's still got most of what made Ruby a joy to work with and several unique advantages, too!

Lots of people complain about Elixir's lack of types and possible incomprehensible mess caused by Macros. It kinda proves to me that everyone has different tastes in software and that's about it, there is no objective truth to any of it. For me Java code is a nightmare.

> I think there’s a clear point in project size where the joy of writing code has to be deprioritized behind maintainability and simplicity, even if it becomes less fun to write code that way.

I completely disagree. Joy of writing code must always be a top priority, even in large codebases. There are enough examples of Ruby (on Rails) being used in large codebases: GitHub, Shopify, etc.

I guess it's a matter of personal match, too. Some people match better with TypeScript, others better with Rails. Nothing wrong about that.

I'm aware that they use a heavily modified version of Rails, but it's still Rails at its foundation.

Gitlab, Discourse, Mastodon, Forem (dev.to) etc etc. All are big OSS software, are maintainable, and seem to have quite a high velocity of feature development. If it was an unmaintainable mess, the whole open source thing wouldn't have worked as well as it does for them. So maybe we should steer the debate towards: it CAN be done (as you can see), but it's more difficult with Rails (which I think is false).

Well, sometimes joyfully written code can be less than joyful for those who have to read & maintain it…

> I’ve tried python, but I end up having to deal with the mess of importing modules

Without those imports, how will people reading the code find the definitions? How will they track down the source code for all the methods being called?

If the answer is "with an IDE / tools", then can't those same tools be used to solve the "mess of importing modules"? If you're using a good IDE, imports and go-to-definition are both solved problems, no matter what language you're using.

If you're not using an IDE, that's a more interesting situation, and without an IDE I much prefer having the explicit imports so that I at least know where to start digging.

The frequent impossibility of finding the definition of the methods I'm calling is the main thing that soured me on Rails. I think a more explicit style is more common nowadays, which is a good thing.

Maybe you already know it, but I find `something.method(:mystery_method).source_location` points me to the right place more often than not. I only mention it because I have noticed a lot of people work productively in Ruby without leveraging its dynamic/reflective features[0], but little idioms like the above are absolute lifesavers for me.

[0]: A big disadvantage of the above is it's not statically resolvable, so you couldn't really build an IDE go-to-definition with it.
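A runnable sketch of the idiom, using a made-up class (nothing Rails-specific is assumed):

```ruby
# A toy class standing in for whatever object owns the mystery method.
class Greeter
  def hello(name)
    "Hi, #{name}!"
  end
end

# Method#source_location returns [filename, line] for methods written
# in Ruby; it returns nil for C-implemented methods.
file, line = Greeter.new.method(:hello).source_location
puts "#{file}:#{line}"
```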

The two tools I'd recommend for navigating ruby code in addition to #source_location are Solargraph[1] and, my favourite, good 'ol exuberant ctags[2].

Solargraph is an amazing project but ctags is still just so fast and low overhead that it's what I mostly use.

ctags support is available for most editors and is built into Vim. You need to first generate a tag file (or have an editor plugin do it for you[3]) and then press Ctrl-] to go to definition.

1. https://solargraph.org/

2. http://ctags.sourceforge.net/

3. https://github.com/tpope/gem-ctags

And don't forget the option of just calling into pry at the line in question and typing ls and show-doc. Also ripgrep is incredibly useful for navigating new codebases.

Yes, grep + debugger / pry were how I solved this when I was still writing lots of Ruby. I find it much more productive to be able to get these answers statically, but I was too strong when I said "impossible"; there's always some way to find the right code, or the VM wouldn't be able to run it :)

Yes, you’re quite right! Thank you again, I must remember to use pry more.

With ruby 3.1 I think we’re getting a new default debug.rb gem.

It will be interesting to see if pry, or the best ideas from pry, can be integrated with that.

This magic is delightful in small codebases I’m familiar with and absolute hell in large codebases I’m not familiar with

Well put. Totally my view of it.

I've worked with Ruby in the past and I currently work with Python and Elixir. This is a caveat that Elixir inherited from Ruby. In Elixir there's the `only` option of `import`, but it doesn't seem idiomatic to use it. In Python there's `import *` but it's an actively discouraged practice whereas in Ruby and Elixir it's the norm. I like all three but on this particular point I think Python got it right.
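For comparison, a minimal Ruby sketch of the "everything lands in your namespace" style being discussed (module and method names invented for illustration):

```ruby
# `include` mixes every public instance method of a module into the
# class -- there is no per-name import list, which is the property
# being debated (cf. Python's discouraged `from x import *`).
module TextHelpers
  def shout(s)
    s.upcase + "!"
  end
end

class Post
  include TextHelpers  # all of TextHelpers arrives, named or not
end

puts Post.new.shout("hello")  # => "HELLO!"
```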

I _much_ prefer Elixir’s way of not using imports much and prefixing most function calls with the module name. That way when I jump to a function definition, I often don’t need to do any further jumping around—I know exactly where everything is coming from right there. I especially hate when language idioms encourage using a shorthand name for the module when importing. Just be explicit!

> [...] I especially hate when language idioms encourage using a shorthand name for the module when importing. Just be explicit!

THIS. 100%!

I also think Python seems to do this better with the namespace pattern of `import x.y.z`. Never seen the `import *` pattern before, didn't know Python can do that (never done Python really though either, only browsed some code here or there).

But one addition to your statement I'd like to add: not just idioms, but documentation can spread this antipattern too.

Noobs come to a language, read the documentation for some tool, library, etc., and understandably come away thinking "Oh ok, this is how it's done" when in reality, while they're not doing anything wrong per se, they're being taught a bad habit right off the bat. And that makes the inevitable hard slap across the face that reality (or a co-worker) eventually hits you with all the more shocking, possibly even leading to resentment of others when this happens.

I myself had this problem many years ago; "why the hell do you need all these subclasses and so many layers of abstraction?!" I once asked. "Because those class/method names already exist within other classes loaded at runtime, or will likely be added in future external dependencies, and we don't want to accidentally overwrite those."

Well, I thought that was a stupid idea. In my (very limited at the time) experience, when I had that problem, I just insisted on finding a damn good descriptive name that wasn't going to be used for anything else, even if it was wicked long or a pain to type. Too long? Don't be lazy. Typos more probable with length? Learn to copy/paste. That was the approach I learned at some point from some documentation somewhere on the internet, and it always worked for me, so it was the "one true path" and all others were inferior, making proponents or users/adherents of any other way "wrong" and inferior to me and my superior, non-lazy intellect. You're telling me that you do it differently? You must be stupid or lazy, and I'm neither therefore I'm better than you; thus forcing me to adopt your inferior standard?! Preposterous! Insulting! Offensive! That's how I saw it.

God, was I a dumbass.

I developed this attitude by learning my skills in a near total vacuum, other than documentation written by people on the internet back in the 90's. Not official company-published technical docs, mind you, but forum and BBS posts, mailing list discussions, and way too much IRC to be considered "healthy" by any stretch of the imagination.

Those who taught me did so in a way that was efficient and they did no wrong at all of course - this failure was entirely of my own making, from my own arrogance. But it's also true that had I been taught at the same time that "there are other ways in other languages that are totally valid; this is just for simplicity's sake" and then been shown, or better yet required to use other patterns in school/projects/etc., I might have been able to see the short-sightedness of my own arrogance sooner. Instead, it resulted in an antipattern that allowed me to become overconfident, arrogant, and more difficult to work with than I should've been.


In case you're wondering, I can't remember what I read or where that came from so long ago. Sorry. Might have been C, C++, Java, DOS BASIC (non-GUI/non-Windows), Visual Basic (before .NET) or maybe even PHP 3.something, no idea - that was 20+ years ago. I was inexperienced, arrogant, and a fool; none of that was the author's fault whatsoever. We're all young and stupid at some point in life. I'm no longer young, but at least now I know I'm stupid! :-)

Yeah that's been a major pain in my ass more than once, totally get where you're coming from there. One tip for you, and anyone else who might be reading this from the future (hi! do we have flying cars yet? what about time machines? can you bring me one of each, in like, 10 minutes?): RubyMine, the IDE from JetBrains (commercial, Java-based desktop app but not nearly as bad/sluggish as you might expect, though not native fluidity) has a feature to show you where a given method ultimately comes from, going all the way up the chain of source code and abstractions to where it's really defined, including within Rails or other libraries/gems/imported files in a given project. It's a really killer feature for this particular pain point, in my opinion.

Short of that, assuming all your dependencies are unpacked under your project's directory (likely .gitignore'd but there nonetheless, instead of in somewhere like /opt/ or /usr/local/lib or something) you could also `grep -ir 'def method_name' | less` or something to try to figure that out, though it's nowhere near as smooth as RubyMine.

Other editors have Ruby and/or Rails plugins that may help ease this pain point as well, though personally I haven't done anything with that on the editor front in a very, very long time. Generally I'm outright militant about keeping my dependencies to an absolute minimum in a project, and doing everything the most "vanilla" (standard, accepted, most widely adopted patterns, naming conventions, etc.) way possible, and therefore I usually know what gem/lib is providing what particular method, so I don't really need that kind of utility most of the time because I can google up some API docs...when there are API docs anyway.

But in cases where you're having a legacy project/monolith dropped in your lap and nobody knows anything because "the business guy" fired the one developer who actually knew how everything worked and now that guy won't answer his calls because working for free is dumb? Yeah, in those cases, RubyMine or a real good editor plugin can be a lifesaver for this.

If that RubyMine feature works well these days then that's great! In my day it was nice but hit or miss.

It isn't a good thing. My first advice to all newbie Ruby programmers is to stop thinking about methods and start thinking about messages.

I don’t see how this helps?

Ruby's dispatch is Smalltalk-like, not Java-like; it is (very) late-bound. The consequences for interface design are dramatic, the natural style being one in which objects notify each other, rather than telling each other what to do, and the result is loose coupling and ease of composition.

People trying to write Java-like OO in Ruby end up confused and frustrated.
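One hedged sketch of what "messages, not methods" can mean in practice (the proxy class below is hypothetical, not from the thread):

```ruby
# The proxy defines none of the target's operations by name; every
# message it doesn't understand is forwarded at dispatch time.
class DynamicProxy
  def initialize(target)
    @target = target
  end

  def method_missing(name, *args, &block)
    if @target.respond_to?(name)
      @target.public_send(name, *args, &block)
    else
      super
    end
  end

  def respond_to_missing?(name, include_private = false)
    @target.respond_to?(name) || super
  end
end

# No `sort` method exists on the proxy, yet the message is handled.
result = DynamicProxy.new([3, 1, 2]).sort
p result  # => [1, 2, 3]
```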

There's a tradeoff between coupling and cohesion. All the books and talks are about decoupling because that's the interesting part. The problem is that in poor codebases you find everything has been decoupled and the complexity is a nightmare. As Adele Goldberg said: “In Smalltalk, everything happens somewhere else.” It's better to have a system where you only decouple where and when you want to, because you have decided that paying the cost is worth it.

They used to pay me to turn monoliths into microservices, now they pay me to turn their microservices back into monoliths...

I've worked with Java, Ruby and Python but I don't get your point. Can you recommend any resource to help understand what you mean? Or maybe some short examples in Python and Ruby that would highlight how implicit imports allow solving problems in a way that explicit imports prevent?

Not the parent, but I’ve found Sandi Metz conference talks to be an amazing resource for learning more about this message focused approach.

But back to the original point, Sandi Metz wouldn't suggest that implicit imports are good.

Sandi Metz's talks, and Avdi Grimm's MOOM course.

Ok, but Objective-C also does messages and most of them are defined statically and can be easily searched for in a codebase. I’m not saying Ruby needs header files to send messages to something, but being able to figure out where the code for something is isn’t off the table for languages with dynamic dispatch and it can come in handy.

For that, there's

but frankly this is still thinking in a rigid mindset that suits other languages better. Ruby isn't just "dynamic dispatch"; a typical metaprogramming technique handles all incoming calls without named methods, or by dynamically writing the code.

To put it bluntly, assuming there's a method on the other side of your message is practically the antithesis of Ruby.

It's similar to the category error that leads folks to conflating type with class, and writing type checkers that look for class signatures.
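As one concrete instance of "dynamically writing the code": define_method generates real methods at class-definition time, so there is no literal `def` to grep for (class and attribute names below are invented):

```ruby
# A reader grepping for `def balance` finds nothing: the reader
# methods are generated from a list of symbols.
class Account
  %i[balance owner].each do |attr|
    define_method(attr) { instance_variable_get("@#{attr}") }
  end

  def initialize(balance, owner)
    @balance = balance
    @owner = owner
  end
end

acct = Account.new(100, "someone")
puts acct.balance  # => 100
```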

While freeing to write, that is horrifying to inherit.

That isn’t code you write in an application. The topic was how to find a method’s handler, this is how it’s done in the REPL.

Or if you mean, dynamically instantiated methods, those are found throughout Ruby and its ecosystem, and the reason it isn’t horrifying is that the language has terrific support at the REPL for developers needing reflection.

Yeah this is pretty much why I decided I didn't want to write Ruby anymore. You may be right that this is the philosophy of the language, but if it introduces a speed bump or cognitive overhead when trying to understand what a snippet of code is doing (and I contend that "there might not even be a method there responding to your message" is a speed bump), then it's not a good thing, even if it's clever and somehow ideologically pure and interesting. I can appreciate programming languages as art pieces, but when I'm at work I want them to just be a tool.

The best features about this bridge are that it's made out of soap bubbles and it doesn't necessarily span the river at any given time. This flexibility is wonderful.

That is not a good metaphor for anything, really: no-one was enlightened; at best some reddit-grade snark was conveyed.

I worked with Ruby for two years. The metaphor is apt when considering software development as an engineering discipline. It was certainly not meant as snark. You disagree, that's fine.

Not really sure what you think I disagree with, since the metaphor is so vague and wishy-washy it could apply to anything from theology to stamp collecting, but it sounds like trying to conflate late binding with some manner of bad coding style.

Well, since I have been privileged to work with several really great Ruby teams over the last couple of decades, whose large and well-structured code uses metaprogramming techniques yet remains easy to follow & comprehend (and debug and extend), and who remain splendidly productive to boot, I can say, if this bunch of guesses at the intended meaning of that "soap bubble" metaphor hold water, then yeah, I disagree.

Conversely, shitty code can be (and is) written in explicit import and early-bound languages. The enterprise world, for example, is riddled with massive hairballs of brittle, unmaintainable, boilerplate-heavy Java.

All of which only goes to show that people, not the programming language, are the problem.

I can't think of a single point in my RoR career where this has come up. How do you even do Java-like OO when classes are so loosey-goosey anyway?

Though having suffered in a Rails codebase written by Java engineers, I definitely agree that a light touch is needed with OO -- though composition has its own difficulties and together this forms one of my bigger criticisms of Rails.

There are plenty of people writing Java-like code in Python too; just check the AWS SDK for Python or the GCP SDK. It's a mess. I don't know if they get frustrated, but they do get SDK users confused and frustrated.

You need to be using RubyMine. Simply command-click on ANYTHING to jump to the definition, even if it's nested deep in a gem that your project uses.

RubyMine, and the "jump to definition" feature in particular, is the primary thing that enabled me to really "get" Ruby and finally understand how everything worked together.

“git grep” is pretty amazing for tracking things down.

That's like someone asking for directions and you giving them a shovel and an introduction-to-archaeology manual.

If someone asks for directions you give them directions or a current map.

Well, no, not really. It is more of the equivalent of showing how to read (or where to find) maps instead of pointing out something on one. If the searcher is in a hurry the latter is more helpful, otherwise the former is preferable.

Yeah, but you shouldn't need git grep to find out what something is.

I understand using it to find out why it's there, but not what it is.

Yeah the problem is that grep understands lines of text but not the structure of code. But code should be understandable by a computer program (by definition it has to be at some point) so you should be able to use a tool that understands the structure of the code it's reading rather than just seeing a bunch of lines of text.
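Ruby does ship a parser in its standard library, for what it's worth; Ripper turns source into an s-expression, which is the kind of structure a code-aware search tool could build on:

```ruby
require 'ripper'

# Parse a snippet into a nested s-expression instead of treating it
# as lines of text.
ast = Ripper.sexp("def hello(name); name.upcase; end")
p ast.first  # => :program

# A structure-aware search can then look for :def nodes rather than
# the string "def", sidestepping comments and strings that merely
# happen to contain it.
```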

Personally I find it faster than using an IDE much of the time. It’s super fast and very powerful.

Of course that's what we end up doing in practice, using external tools (CLI or IDE) to work around a limitation in the expressiveness of the language. This advice is a testimony that there's indeed a problem upfront :) And it falls short when searching for commonly used names...

> This advice is a testimony that there's indeed a problem upfront

Indeed, and the problem is that IDEs have encouraged developers to write convoluted code and to forget how to navigate a filesystem. Reading code and commit history is a much faster path to understanding a system than an IDE's autocomplete. I learned this from Ruby, and it serves me well in C, Kotlin, Scala, PL/SQL, Typescript, etc.

> IDEs have encouraged developers to write convoluted code and to forget how to navigate a filesystem.

I would argue the exact opposite. Developers who force tools designed 30 years ago to dictate programming language best practices today are holding back the entire industry. There is no reason for my tooling to depend on my filesystem; imagine if my IDE integrated directly with vcs, and a language server provided both of those tools semantic information about my code, available on any device. Let's not kid ourselves either, utf-8 isn't human readable on disk, it's still a binary-like format.

You're right to remind that the filesystem is a metaphor, a model and not the reality of what's going on on the computer. The thing is it's still the common denominator between different IDEs and text editors. IDEs cannot be used in every situation so any professional developer who uses an IDE also knows his way around with a text editor. If we were to come up with a better abstraction than the file system, it would have to be shared between different IDEs for interoperability. My understanding is that Smalltalk was such an attempt but somehow that aspect of it didn't really spread. Are you aware of current progress being made in this area?

I did that for many years, then I switched to languages with good static analysis and felt really silly about how many years I had wasted wandering around in darkness.

Would a Ruby IDE be able to figure it out?

In my experience on IntelliJ with Ruby extensions (so working like RubyMine), no.

Input and output parameters on a program with hundreds of thousands of lines become... rough, to say the least. Dry::Struct has helped a lot; but with several teams working on a single codebase, each with their own opinions on method-calling patterns evolved over several generations, it's... rough.

RubyMine was impressive (because it's a hard problem) but still hit or miss when I was writing Ruby about 5 years ago. It may very well have improved a lot since then!

Those are fair points, but I just kind of hold the entire program space in my head. I work on a pretty large and complicated Rails app that does real world things (handle money, deal with people and companies, etc) and it's daunting at first but after a while it's sort of become one big system of objects interacting with other objects in a global objectspace and that's how it exists in my head.

I think for people that come to love Ruby it feels more natural to think about a program that way as opposed to thinking about a purely lexical context of a piece of code.

I think this is actually a very efficient way of programming. but it doesn't scale in project size and team size. Similar for Lisp. This is why these languages are really good for prototyping, but not as good anymore for big, longterm projects imho.

That's also why I don't think it's a great idea to add types etc. to programming languages like python or ruby. They are not made for this and they should focus on what they do well. Use the right tool for the job, don't try to make every tool being able to be used for the same task.

This is just not true. There are several multi-Billion dollar public companies with millions of users and thousands of employees built on Rails.

In fact, the philosophy of convention over configuration et al. makes Rails MORE scalable and easier to onboard new people onto than anything else out there that I’ve seen.

I agree; the conventions mean there are very straightforward places to look for things.

I recently worked on a Rails project for someone who didn't like to create scaffolds, models or controllers for small things, thinking that it would add to the bloat. But conceptually it makes everything 10x easier and faster to find and understand.

Well, you'd better not check what happened to all those Rails apps when those companies grew.

There are soooo many blogs about migrating away from Ruby, adopting Go, Scala, whatever statically typed thing.

There's a reason even Ruby is getting a proper static type system.

RoR is the most popular choice with Y-Combinator companies.

Its productivity is simply hard to beat.

Most of these companies are unicorns or public. https://spreecommerce.org/ruby-on-rails-most-popular-among-t...

Dropbox literally hired Guido for a while and sponsored a performance oriented fork of Python.

I doubt they can be listed as a Rails success story.

I don't think Dropbox even uses Ruby. They have been a Python shop since day one?

Stripe doesn't use Rails either. That was a pretty bad article. And it is somehow from Spree.

Um, really? Why are unicorns like Shopify or GitHub still using Rails if that's the norm?

The same reason Facebook uses php: that's what they started with and it's prohibitively complex, expensive and risky to change at their scale.

Shopify is not a unicorn... sorry for nitpicking. It's a publicly traded company worth $210 billion. It used to be a unicorn a long time ago.

Now that I think of it, are there any major startups launched post Rails glory days? Some time around and before Rails 4.0, so 2013.

Almost all the major former startups that I could find are from 2010 or earlier.

Depends on your definition of major. For a company to become a unicorn, it usually takes about 7 years (not sure on the number, but it keeps changing), so many companies founded in 2015-16 and later aren't there yet. Looking on LinkedIn for Rails jobs, there are a lot, and not all of them are at old companies. GitLab is a very famous name (2014), but it too IPO'ed, so it's not a startup anymore.

Gitlab definitely counts ;-)

Which isn't to say lexical scope isn't important in a ruby program -- it's paramount, especially for resolving constants -- it's just that after a while it's not that big of a deal that there is some ambiguity about where things come from. For me it's one of the most intuitive languages I've ever worked with but it requires a sense of intuition that is learned over time, as contradictory as that sounds.

It also requires a certain sense of confidence that whoever named that method you're calling gave it a good name. Fortunately (and not, I think, coincidentally) the ruby community has always tended towards good naming being really important.

Explicit programming is more readable, but can result in a lot of boilerplate. "Magical" programming is much more intuitive once you know where the magic comes from but can be maddening to debug. Any framework large enough to be useful starts abstracting away implementation details.

I've found that people that gravitate towards Ruby enjoy meta-programming. The running joke is that you're not a Ruby programmer until you've decided that you should write a DSL. Like everything, clever programming has its tradeoffs.
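A hedged sketch of that rite of passage: a toy routing DSL built on instance_eval (all names here are invented, not any real library's API):

```ruby
# The block is evaluated with the RouteSet instance as `self`, so bare
# calls like `get` read as declarations -- the classic Ruby DSL trick.
class RouteSet
  attr_reader :routes

  def initialize
    @routes = []
  end

  def get(path, to:)
    @routes << [:get, path, to]
  end
end

def draw(&block)
  set = RouteSet.new
  set.instance_eval(&block)
  set.routes
end

routes = draw do
  get "/posts",     to: "posts#index"
  get "/posts/:id", to: "posts#show"
end

p routes.length  # => 2
```

The tradeoff the comment mentions is visible even here: the DSL reads beautifully, but `get` only exists inside `draw`, so a reader grepping for its definition has to know the trick.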

These days no one reads the source code of every method they call until something is not working the way they expected it to.

Personally, I find the JS ecosystem the absolute worst for this because of the amount of imported code needed to layer basic functionality on top of the standard library.

If you've worked with rails long enough - well at that point things like Devise are probably in your mental model of how they work.

Personally, I still find the unbridgeable chasm between Python 2 and 3 to be a large friction point in the ecosystem. But if I worked in Python as much as I worked in Ruby, I may find Ruby more troublesome.

The simple fact is all languages are getting better. The best tool is not always the one that would be best for the job, sometimes it's just the one you know best. And the tradeoffs we make for ease of programming and abstraction can then turn around and mean we are pushing megabytes of JS over the wire when we don't need to, and now we are trying to solve that with more explicit inclusion and automatic tree-shaking in build pipelines etc.

It's all good. Every tool has two edges. I'm much happier to see people continue to engage with these edges and figure out how to make them more effective than to always argue about which edge is sharper.

Magic requires memorization.

Memorization is not consistent between teams/team-members.

Memorization is also the fastest skillset folks lose if they step away from the language/toolset for a while (it degrades rapidly)

Worst of all - The RoR ecosystem (6 versions in) requires memorization about which set of things you need to have memorized for this specific version of the framework.

I'll take some boilerplate code EVERY FUCKING TIME. The magic becomes a curse.

What framework are you using then? Rails is pretty intuitive and easy... I am biased, yes, but nothing I saw from Spring or Django made me think they made more sense than Rails. The whole Bean thing is pretty crazy. Django is an over-engineered Rails from what I saw.

> Without those imports, how will people reading the code find the definitions? How will they track down the source code for all the methods being called?

I think in dynamic languages like Ruby and Python, there are two very different cases that we need to distinguish when thinking about this.

The first one is global references to modules/classes/functions. For example, in Python you may write:

  import my_module
  foo = my_module.Foo(some_arg)
So, if you wanted to find the definition of that my_module.Foo() reference, it's easy: go to the definition of my_module and search for the Foo class/function definition.

In Ruby, you may see instead something like:

  require 'my_module'
  foo = MyModule::Foo.new(some_arg)
Which looks very similar to Python, with the difference that the `require` method doesn't assign any local binding, but just executes the file my_module, which in turn will modify the global namespace directly and make `MyModule` available at runtime. The `require 'my_module'` might not even be present on the file that's then using `MyModule`: it might be indirectly required by another module that this file uses. Or, in the case of using Rails or any framework with auto-imports, the file might not have any `require` calls at all!

So, how is this resolved?

By convention, basically. If you see a reference like `MyModule::Foo` and want to see the definition of that, the convention is that the definition will live under my_module/foo.rb, so you can go to that file with some confidence that you'll find the code for that class there.

So, if the convention is strong, and people follow it, this is basically a non-issue, and the go-to-definition can be done as easily as in Python, with or without an intelligent IDE.
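The mapping itself is mechanical: constant path segments become snake_cased path segments. Here's a toy version of that convention lookup (a hypothetical helper for illustration — the real Zeitwerk uses a configurable inflector and handles acronyms like `HTTP`, which this naive regex does not):

```ruby
# Toy Zeitwerk-style convention: map a constant path like "MyModule::Foo"
# to the file path where its definition is expected to live.
def expected_path(const_name)
  const_name
    .split("::")                                              # ["MyModule", "Foo"]
    .map { |part| part.gsub(/([a-z0-9])([A-Z])/, '\1_\2').downcase }
    .join("/") + ".rb"
end

p expected_path("MyModule::Foo")          # => "my_module/foo.rb"
p expected_path("Admin::UsersController") # => "admin/users_controller.rb"
```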


The second case is when you have a dynamic reference to an object/function/whatever, like:

  def do_something(arg):
      arg.foobar()
In this case, neither Ruby nor Python (or any dynamic language in general) will help you much in trying to find the definition of the `foobar` method, as the type of the argument will only be known at runtime at the moment that this function will be called, and can even vary between calls.

There are different ways to make this go-to-definition somewhat more ergonomic than a global search for `def foobar`. But on the language level, I think both Python and Ruby are basically the same in this case: not very helpful. We then compensate by using good naming conventions, type annotations/comments, or powerful IDEs that analyze the code to make good guesses about who is calling what.
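Worth noting that Ruby itself can answer "where was this defined?" at runtime, which softens both cases a bit. A small sketch (Ruby 2.7+ for `const_source_location`; `MyModule::Foo` is just a stand-in defined inline here):

```ruby
# Ruby can report, at runtime, where a constant or method was defined,
# regardless of how the file that defined it got loaded.
module MyModule
  class Foo
    def foobar; end
  end
end

# [file, line] where the constant was defined (Ruby 2.7+):
p Object.const_source_location("MyModule::Foo")

# [file, line] where a specific method was defined:
p MyModule::Foo.instance_method(:foobar).source_location
```

In a running console (or a Pry/IRB session against your app), this gets you to the definition even when autoloading obscured the `require` chain.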

I just left a rails gig to go back to django. I like ruby ok but not a big fan of rails.

I totally see the value and understand why someone would like it but I just never could get to that point with it.

I much prefer the explicit imports, loops, etc I suppose

That's interesting, what you think Django excels that Rails doesn't? Which ones were the pain points in your opinion?

I think it’s just preference really. I don’t think Django is better, I just prefer some of its choices.

For example, I really don’t like that rails implicitly imports stuff into the namespace.

I like being able to follow the imports explicitly at the top of the file. I feel like there is way less of “where does this come from or what IS this?” when working with Django.

Autoloading is a major source of pain in Ruby, I think.

What function did I import, exactly? Which of the 37 “update” definitions did I actually invoke when I called it on this?

> I’m sad about the move to mounds of JavaScript. Nothing can match the productivity of simple_form and bootstrap.

You might like some of the recent Rails 7 previews. I believe they’re removing Webpack out of the box and trying to simplify things on that end[0].

[0] https://weblog.rubyonrails.org/2021/9/15/Rails-7-0-alpha-1-r...

to me, this is what makes the upcoming rails 7 release really exciting: no more node, npm, webpack, & yarn! all that js tooling complexity thrown right out the window, replaced with the much simpler set of es6 modules, turbo & stimulus (and maybe strada too).

There seem to be a lot of Ruby pieces falling into place for Rails 7.

The Achilles Heel of Hotwire apps has previously been the low number of supported websocket connections and high memory usage when using ActionCable and Puma but I have high hopes that Falcon[1] will take care of that.

That, along with GitHub's View Components[2] and Tailwind, makes me really pleased with the way Rails is heading right now.

1. https://github.com/socketry/falcon

2. https://github.com/github/view_component

yes, websocket connections/memory use was a problem, will have to take a look at falcon.

i'm personally not a fan of tailwind (or bootstrap) for being overly/cryptically verbose, but i understand why it's popular when coupled with view components, which do seem useful.

>Node.js comes so close but anemic standard library

The nodejs API is rather big really. If you're using Ruby as your standard of normal sized almost everything would look small in comparison, but "anemic" is a stretch. You'd really hate Lua.

If you're taking suggestions, you might enjoy Janet[1]. It has a pretty large user API, lots of bells and whistles included in the standard libs like you'd find in Ruby, and lots of different ways to achieve the same result, like Ruby. It's been a while since I checked out the ecosystem, but I was _very_ happy with the Joy Framework[2]. It definitely doesn't come close to the scope of Rails, but I think it _does_ near Rails in its ease of use for the developer, with its scaffolding, controller generation, etc.

[1]: https://janet-lang.org/

[2]: https://github.com/joy-framework/joy

>If you're taking suggestions, you might enjoy Janet[1].

Just curious about why you'd suggest going from a popular stack to Janet? I'd rather suggest Common Lisp as it's a way more stable, mature and standardized technology.

Funny you should mention Lua. It’s a fun little language I very much enjoy. Lua isn’t typically used for business applications with heavy logic and a lot of data manipulation.

I got sucked into React, GraphQL and Node.js and always missed Rails; with all the hype I never thought that the environment would be so far behind. It's all my fault.

But now that Rails has incorporated Stimulus as a sane approach to the client-side of things, I'm coming back and it will be difficult to take me away again.

> I’ve tried purely function languages and I just can’t wrap my head around them because they abstract too much.

Not sure what you mean here specifically (do you not understand Enumerable#inject in the Ruby stdlib? That's about the hardest thing to understand, and once you're there, functional langs are pretty trivial). I was already coding in a functional style in Ruby (POROs, intentionally not mutating vars, etc.), so that wasn't an issue for me at least, when I hopped to Elixir.

The kicker was when I spent an entire month debugging a problem that would have been impossible in Elixir/Phoenix. I was working on a million-line Rails codebase and there was a session issue where sessions would seemingly randomly get dropped... it was nondeterministic and absolutely maddening; a number of other devs had attempted to fix it and failed, and I finally did. It ended up being a key-overwrite issue with a regular Hash being merged with a HashWithIndifferentAccess in the env hash passed between Rack layers (and god, was that hard to find! I had to write custom debugging tools that walked through every Rack layer and deep-diffed the env hash, etc.). That was the last straw.
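The underlying failure mode is reproducible with core Ruby alone: a plain Hash treats String and Symbol keys as distinct, so layers that write the "same" key with different types silently leave both behind. (The actual bug involved ActiveSupport's HashWithIndifferentAccess, which papers over exactly this distinction; this dependency-free sketch just shows the key-type mismatch.)

```ruby
# One Rack-ish layer writes a String key, another writes a Symbol key.
env = { "session_id" => "abc" }
env = env.merge(session_id: "xyz")

# A plain Hash keeps both entries, since "session_id" != :session_id ...
p env.keys           # => ["session_id", :session_id]

# ... so the value you observe depends on which key type you read with:
p env["session_id"]  # => "abc"
p env[:session_id]   # => "xyz"
```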

So far, Elixir/Phoenix has been fantastic for me at NOT producing flaky tests, nondeterministic-seeming bugs, etc. Massive productivity boost.

> Then there is Rails. Nothing even comes close

Have you tried Phoenix? It's really close. It's better in some ways. The data layer is quite different however. As much as I like Ruby, Elixir made a lot of things I liked about Ruby better

Yup. The lack of libraries made it a non-starter. Rails has just about everyone beat in that category.

I think many would be pleasantly surprised at this point, especially considering the entire Erlang library is also trivially accessible:


Are you happy with the way things are going for Elixir? It seemed to me like the Elixir community was hoping for it to become the next Ruby and I don't see that happening anymore. It will have to settle for being a well respected but obscure piece of tech, kinda like what Erlang is.

Elixir and Phoenix are more popular than ever, especially with the release of LiveView. The ecosystem and community continue to absorb refugees from Ruby and other stacks.

It's such a better platform than Ruby and Rails that I won't bother writing about it here - there are already hundreds of blog posts about it. Productivity is at least on par with Ruby (I believe better) and performance on another plane of existence.

Sorry, that really doesn't add up to what I see when I look at job boards; Elixir has really poor numbers. The Ruby guys who wanted to jump ship to Elixir already did so a few years ago (some of them will continue to jump ship from Elixir to Rust or Go, because hey, why not). So I don't know where new growth for Elixir will come from.

It's not just jobs, by any metric you can think of Elixir is an obscure tech, if you want I will add sources to this claim but hopefully we can agree.

You’re right. Software development has devolved into a popularity game.

I think people may be traumatised by what happened to Perl or Cobol. They don't want to become obsolete. In reality what happened to Perl isn't the norm imo.

PHP is the more recent one. Folks who worked in PHP but got out at the right time look at all the low-paying PHP jobs (and there are lots of them!) and see mostly-PHP-experience job candidates dismissed out of hand for higher-paying jobs that would offer work experience outside PHP and think "there but for the grace of God, go I".

PHP has declined, but it's by no means a dead or dying tech. Pay should be good in the good companies (Slack? MessageBird? I am sure there are some big names; I don't follow the PHP world that much). What I am saying is that even declining tech can provide stable income for decades. Perl is a different story though. I don't know if the Perl people can still get jobs writing Perl.

Oh no, it's not dying—there's a ton of PHP work out there—it's just that most of the PHP dev market is very stagnant, wage-wise, outside a handful of companies, in a way that most languages in wide use are not. There's also a real stigma that goes along with still being mostly a PHP person these days, I've noticed. (I was one, like... 9 or 10 years ago)

Pepsico, Toyota... come on now, that's weak. They're not even IT companies, let alone Tech. WhatsApp is running on a fork of Erlang afaik, not Elixir. Look, I'm not saying no one is using Elixir, I'm just saying there are few jobs, that's all. Cherry-picking some famous names isn't gonna change that.

It's a difficult metric, that one. In part because I get the impression that high-specificity tech like Elixir and to a certain degree Clojure attract people who, once they're in the job, _never leave_, and their productivity is high enough (if you believe the marketing) that you just don't need to grow a team as hard and as fast as you would on a more "popular" stack to get equivalent outcomes.

If all that's true, it's a natural consequence that better tools will have fewer job postings: you need fewer people to get the job done, and the ones you've got have really good retention. However, so will tools on the other side of the bell curve: nobody in their right mind wants to go there, either on the supply or demand side. So all popularity really tells you is where the enterprise hype machine was, five, ten, fifteen years ago such that "nobody got fired for choosing X" and "learn X! It's got a lot of job postings!" still align.

But the enterprise hype machine is optimised for tools that have a very specific set of surface criteria: enormous libraries so you're not inventing anything, rock-solid vendor support so there are people being paid to evangelise it into orgs (or, like JS, are ubiquitous anyway, but I think that particular lightning only strikes once), and "easy to learn" ergonomics such that developers asymptote towards replaceable cogs in both quality and quantity.

"Well respected but obscure" describes a certain sort of success, by that reasoning. It just needs enough of a community to stay alive, and that's a bit more of a toss-up. Ruby haemorrhaged people in the Rails 4-5-6 cycle as people jumped ship to node, but Rails 7 and Ruby 3 are looking extremely tasty despite that, and I wouldn't be surprised if we saw a bit of a resurgence in popularity over the next couple of years outside big enterprises.

Most of the people I interview have got into tech after Rails 3 was released, and generally aren't aware of why the Rails/Phoenix/Django concept was such a breath of fresh air. I think what you'll see is that the reason Phoenix and Elixir become the "next big thing" won't be technical factors at all. It'll be a rediscovery by the new generation when someone unexpected goes unicorn and the story is "it was all down to Phoenix", even when the real factors were elsewhere.

Plus these companies tend to have tens of tech stacks and hundreds of projects; you never know if what they're highlighting isn't some small internal project used by just one team and developed by more or less a lone-wolf developer, with the project on life support after said developer left the company.

PepsiCo and Toyota are sponsors for Elixir events on the regular.


how about PagerDuty, Divvy, Slab? I once worked at a company where the CTO said 'elixir is an unknown quantity and I'm not sure we want to rely on it' and here we were using three critical services (one financial!) that were running on elixir.

> there's little jobs that's all

wasn't an issue for me. I applied for five elixir jobs in a limited sector (fintech) and got an offer.

Have never heard of Divvy and Slab. PagerDuty is a nice company though. That's still not much. You can't compare it to Go or Kotlin or Swift, which are about the same age as Elixir.

I guess you aren't working in startups. I'm at literally my third company that uses divvy as their corporate card.

You probably haven't heard of Klarna, or Discord, either.

It's very weird to assume I'm some dinosaur because I haven't heard of a company that doesn't even have a Wikipedia page and hasn't IPO'ed yet. We use Recurly to do subscriptions and our expenses aren't done in the U.S (maybe that's the thing?).

About Discord see my other comment - I don't think Elixir is their main language.


> From day one, Discord has used Elixir as the backbone of its chat infrastructure.


> Another is a re-architecture of our guilds service, which as our biggest communities have grown, started struggling to handle 100,000 connected sessions per Elixir process. We horizontally scaled the Elixir processes so that we can scale as the number of connected sessions increases... Many of our stateful services that maintain connections to clients, and process and distribute messages in real-time, are written in Elixir, which is well suited to the real-time message passing workload. But in recent years we have run up against some scaling limitations. Discord has written and open-sourced several libraries that ease some pains we’ve encountered with managing a large, distributed Elixir system with hundreds of servers

Yet arguably, exposure to LiveView turned the Rails ecosystem back towards tools like StimulusReflex and Hotwire, etc., instead of everyone going for the very heavy Rails API + React stack, which honestly is overkill unless you absolutely need a SPA or are building out multiple front ends against your backend.

Discord, an app with 300 million users, is apparently built on an "obscure piece of tech" >..<

maybe come out from under that ruby rock and smell the dev air now and then

Also, I was on Ruby since the time when it was "trying to be the next Java" (this was prior to 1.0!). All good things start out small.

Anyway, after the initial learning curve, much happier working in Elixir/Phoenix than I ever was in Ruby/Rails... especially from a code-writing AND code-maintenance standpoint. Immutable-everything is the way forward (which also massively aids concurrency). It also being 10x faster and far better/easier at working in a concurrent context are also perks.

1. I don't know Discord that well but looking at their Github it lists Javascript, Python, Rust, C++ and Elixir (in that order). Is Elixir their main language? Their careers page has tech all over the place - Rust, Java, Python, I actually am not bumping into Elixir. Could be a coincidence but it reinforces my feeling that Elixir is not the main language on Discord (if they even have one).

2. The amount of users Discord has has nothing much to do with anything. Facebook has 2 billion users and its programming language is Hack. So Hack isn't obscure? How many people know and use Hack besides a few thousand Facebook employees? If Facebook gets 2 more billion users in the next decade will Hack become 2x as popular?

Elixir is the language that ties all those other languages together there and was also their first language, until they had to branch out into those other ones for performance or other reasons.

> The amount of users Discord has has nothing much to do with anything

The argument I was arguing against was that Elixir is niche, not argumentum ad populum. It's no longer really niche, just elite. ;)

I see Elixir pulling a lot of people in with LiveView, especially those in the node/react world who are getting tired of the churn and build complexities.

> I’ve tried python, but I end up having to deal with the mess of importing modules

It's funny, the fact that Ruby doesn't have a facility to import modules is one of my biggest gripes with it. Instead of importing into a specific scope, the global namespace becomes a dumping ground for anything that has ever been loaded, and you need something like Zeitwerk to do auto-loading for you. Which I guess can be seen as a convenience initially, but it makes things harder to manage as your app grows in complexity.

I have mixed feelings about autoloading. I never use it for my own stuff, ever: explicit `require`s all the way. But I do appreciate not needing the boilerplate in Rails.

It would have been nice to be able to do something like: `MyActiveRecordImport = require('activerecord')` to take control of the constant name, but honestly it's not something I hugely miss.
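For what it's worth, you can approximate that by hand: `require` just returns true/false (loaded vs. already loaded) and mutates the global constant table, but since constants are ordinary values, nothing stops you from binding your own name afterwards. A small sketch with a stdlib module:

```ruby
# require binds no name itself -- the loaded file defines global constants.
# It returns true on first load, false if the file was already loaded.
loaded = require "set"
p loaded                 # true or false, depending on prior loads

# Constants are just values, so you can alias one to a name you control:
MySet = Set
p MySet.new([1, 2]).include?(2)  # => true
```

It's a far cry from real scoped imports (the original constant stays globally visible either way), which is presumably why this idiom never caught on.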

You nailed it. I work with Ruby, TypeScript (React, Node) and Elixir.

While Elixir has more than enough uses and I love it dearly, nothing comes close to the productivity and expressiveness of Ruby (+ Rails).

I have very high hopes for JS/TS. Using a single isomorphic language for web development must have some benefits, but I'm afraid it's not there yet.

> Just about every array operation I could want is there, waiting for a code block. In other languages, things don’t flow like that.

This is a question about the standard library. The standard library of Kotlin is equally amazing (but Kotlin more or less requires the IntelliJ ecosystem, of course).

Betting an entire language's future on a single framework isn't working very well, I guess.

It is 2021, web frameworks are ... one of the solved problems. Polyglot is the future, and people are already adapting

> Polyglot is the future, and people are already adapting

Yes, right. I like how you can read into the future. Let's see what happens 10-20 years from now, things tend to change. But even if we look at how things are now, plenty of places still commit to 1-2 languages (in fact if you're a small shop it's pretty crazy to try anything else).

It depends heavily on the problem you're solving. A few jobs back I was in a small shop and we were all regularly bouncing between Java, C, Ruby, JS, Perl, Python... Go crept in round the edges too. We wouldn't have been able to deliver without the full set just because of what we had to interface with. It's more than doable as long as the culture is "learn the necessary skills".

> plenty of places still commit to 1-2 languages

That might be true, but it doesn't mean they will commit to ONE language forever. Even government agencies' code gets rewritten in newer languages eventually:


C, Java, Javascript, Ruby, PHP, Python, all these languages are 30-50 years old. I don't see any of them fading away the coming 20 years from now (you might argue about Ruby since it's much smaller, but it has a very strong community keeping it moving). You could bet your career on any of these and I'm pretty sure you will be fine 20 years from now.

I am a polyglot outside of work, across many fields beyond just tech. Wordpress themes, managed networking, and home automation have been my latest tech projects. (Let’s make light switches less reliable!)

Since I’ve run Linux servers at every job I’ve had, I’m trying to get my toe into devops at my current employer. Getting all of our applications into Docker has been fun. (Why does everyone run shit as root in a container!? Dropping permissions isn’t difficult!!!)

That's easy to say, but for small teams and startups it's difficult to achieve.

I like your honesty.

I’ve bounced around over the years and I’m doing a lot of Python now. It’s funny you said that about “loops” because I do feel like I’m always looping something in Python, ha.

I quite like Kotlin when it comes to a language's "flow." Pity there's not a better web server framework for it

We've been building on top of Spring Boot with nodejs as a part of the build process for front-end assets.

It's not as nice as rails, but we love kotlin so it's hard to give up.

> [...] I’ve used a number of languages and dabbled in a few frameworks but nothing I’ve used brings me joy like ruby does.

Totally agree, 100%. Ruby isn't "perfect", but it's a pleasure to use. As in, I actually want to use it, not just am "ok" or ambivalent about it. I have my nitpicks about Ruby, about Rails and other things just like anyone, but I second your comment that nothing else has been able to match that sweet spot of great productivity that is a direct result of a fantastic developer experience created through a programming language that just "flows" linguistically so much easier than...well, everything else I've personally ever seen (with Python being an arguable tie there).

My biggest gripes about Ruby and its ecosystem have been the pain in the posterior it is to deploy apps (all those gems, which eventually wind up abandonware and now you've got dependency hell, especially with apps not maintained in 5 years or something) and its relative sluggishness when compared against some other deployable artifacts.

Personally, I think Crystal (https://crystal-lang.org/) fixes pretty much all of this, as long as you're willing to cede a few things due to the nature of the beast (compiled vs. interpreted).

And that's not considering the speed improvements Ruby's gained in the last few years (3x3); it's undoubtedly far better now than when I last released an app with Ruby or Rails (circa ~2016, maybe ~2017ish). I'm just so disappointed that everything's "JavaScript this" or "Go(lang) that". Not that I have a problem with Go (I do have many problems with JavaScript, which I strongly dislike, but that's a separate topic); it's just that this industry acts like lemmings in a lot of cases. "New shiny!" attracts the horde, and that critical mass creates a new tyranny of being the "snowflake" technology/stack/developer, which has a lot of risks both for the organization paying for what you build in tech stack $X (rare language; can I replace this hire later; can I find somebody that knows $X; lower risk if we use $Y b/c we can find plentiful/cheap talent in $Y, or $Y salaries are lower than $X) and, by the same token, for the developer, whose career is at risk from going hard-core on anything perceived as "snowflake" or otherwise not "flavor of the $(month || year || interval)".

Which is just so sad, and I feel like it's a contributing factor in Ruby's decline (in terms of hiring demand for full time positions). It's not at all the technology's fault, and it's not a performance "issue" whatsoever anymore (really wasn't in the first place unless you were nickel and diming literally everything or did stuff just plain stupid).

Go, Rust and friends all have their own benefits and drawbacks too, and they're fine languages with their own killer features and/or quirks. They can produce pre-compiled code for a variety of platforms from a single machine, resulting in (if desired) a single-file binary deployable artifact that can be installed and simply run, no dependencies to install, no OS configuration necessary. Ruby, without lots of potentially questionable hacks and potential future abandonware, doesn't do that at all AFAIK, but Crystal can produce fat binaries just like Go can (and I assume Rust can too), making it the best of both worlds in my opinion.

Crystal: compile Ruby-like code to a "fat" binary for single-file, zero-config/dependency deployment that runs true multithreaded apps as native code. Most of the tradeoffs you'd expect to give up because it's compiled can be worked around pretty easily, and I've heard you can even have it embed source code in the binary to be run in interpreted mode at runtime, so you can have your app compiled for the vast majority of use cases, then have it run its own code inside itself interpreted/JIT'd when run, giving you access to many (all?) of the features you'd otherwise think you'd have to sacrifice.

So yeah, I love Ruby, and I think Crystal is definitely the next evolutionary step for that language and ecosystem. No hate on Ruby there whatsoever, I just see it as a more mature option for a lot of use cases, but definitely not all. I don't know if you can do that metaprogramming magic Ruby is so amazing at any faster in Crystal since you'd have to run the code as interpreted at runtime, not pre-compiled (AFAIK), so it's not an outright replacement. Still, I think it's damn near one, and you can probably "color outside the lines" just a tiny bit as needed when you absolutely MUST have that feature anyway.

Okay, end stream-of-consciousness. I haven't been able to sleep for 3 days, so rambling is a sort of unavoidable side effect...sorry about that. But yeah, try Crystal if you haven't already, you'll likely be very happily shocked at how amazing it is!

I wonder why Crystal is not more popular now that it has reached 1.x; as you say, it fixes some important Ruby issues, but most importantly it also fixes Elixir's weaker points.

I can't comment on the Elixir stuff since I haven't really touched it at all yet (only ever seen one job opening in hundreds over 5+ years mentioning Elixir - only one, total!). But I have a theory on the rest...

I think the actual reason why it's not more popular is something I personally call "The Tyranny of the Masses". In short, what's popular/flavor-of-the-month is automatically "right", and everything else is automatically "wrong" or "high risk" in our industry. These are of course not real demonstrable facts, but people's perceptions that influence choice of technology stack.

Take JavaScript/Node.js for server-side apps for example. MASSIVE ecosystem. Near total ubiquitous skill availability in the developer market (with one major caveat I'll explain below). Not perceived as a rare or difficult skill to pick up, so salaries can be on the lower side than, say, a C developer with Linux kernel experience, for example. To employers and managers, this is a much more attractive mix than something where all of the above are in any way seen as "less true".

Compare that to Elixir. Not around nearly as long as JS, far fewer learning resources, relies on the perceived turbo-nerd/esoteric "Erlang" runtime, so support availability may be harder to find and more expensive (so what does that mean for the employer's support and availability contracts, uptime agreements, and/or regulatory environment should something go sideways?). Not nearly as many available published libraries as on NPM (yes, I'm sure Elixir's are far higher quality, but remember: it's perception, not fact, at play here). Skilled hires in the market not nearly as common or ubiquitous as JavaScript devs are. Nobody has 20 years of Elixir experience. 20+ years of JavaScript experience does exist, though (again, perception matters, not the fact that JS 20 years ago was laughably bad and would almost never even run today!).

Then there's the "safety" factor for production apps. JavaScript? Billions of apps all over the place. Nearly every prod scenario imaginable has happened, and 90% of them have at least been blogged about. Async/await in a single-threaded runtime using multiple processes for concurrency? Yep, somebody's done it. True multi-threaded single-process concurrency? Yep. Big, perceived-as-financially-stable tech company behind it? Yep. Large market/ecosystem of firms that provide high quality support at reasonable prices to compete for business? Yep. Elixir? Arguably none of that, and definitely not in the quantity - again, not quality; quick-glance perception, optics is all people look at here - that JavaScript has behind it.

Is Elixir a better language, runtime, and ecosystem than JavaScript/Node.js? Almost certainly, and you can probably prove that pretty objectively with only some very obscure/esoteric exceptions having merit. I mean, dude, it's JavaScript, that's a low fucking bar! (Yes I'm a JS hater, go ahead, flame away! "YOUR BOOS MEAN NOTHING TO ME, I'VE SEEN WHAT MAKES YOU CHEER!" - Rick Sanchez)

But is it popular? Not compared to JS! JS is like that air-headed valley girl high school cheerleader that every guy wants to sleep with, and Elixir is that nerdy girl with a good heart, a great head on her shoulders, who can respect people and while she covers nearly every inch of her skin with clothing, leaving everything to the imagination, even a blurred photo would tell you that underneath all that, hot damn! She's the total package. But you want the cheerleader because of the popularity contest. You'll be the big man on campus if you get with her! Well, maybe not "the", more like "a", one in an ever-growing series.

Fast forward to 5-10 years post HS graduation. Cheerleader? Trailer trash with 6 kids on food stamps and zero drive in life whatsoever. Nerd girl? CEO of her own startup, and somehow even hotter now than she was back then!

Oh how foolish we are for falling into the trap of popularity. And while some may feel justified in their subservience to that popularity because reasons, some of them even good, we'll all come to curse the lack of choice, the absence of competition, and the void of creativity and innovation when we inevitably acquiesce to the clamors of the mob. Our future's bright hope will be made dim indeed when we succumb to the Tyranny of the Masses.

EDIT: The caveat on JS skills I mentioned above? Almost everybody who says they "know" JS is really at best an intermediate-level developer with it. It has a lot of special use cases, variations in how things work depending on engine, execution context (server app? browser engine?), and even today in 2021 we still see even self-described "experts" telling you to use alert() to get debug data instead of at the very least console.log/warn/error, etc. The language has made some frankly just plain weird (IMO flat out bad) design choices, like allowing more arguments to a method than its signature declares, and used to be so vastly snowflaked (nearly zero consistency between browsers) that you absolutely had to write an ENTIRE new JS app for each browser you wanted to target. Granted, that was the Netscape Navigator days, when Internet Explorer (Exploder?) was version 5, the rightly-maligned version 6 didn't exist yet, Firefox was a twinkle in somebody's eye, and Microsoft even shipped IE for the Mac (like OS X 10.1 or 10.2! Remember Aqua, the striped textures? Ahh, the good 'ol days...) It's come a long way since, but it still fundamentally suffers from having more bad documentation than good, and while most consider it a "lowest common denominator", that's unfair to people who really are true experts in the language, who will tell you that when you get real deep with Node or V8, you can see some gnarly shit. It's a god-awful security nightmare; why the hell would I let whatever website in the world run whatever code they want directly on my machine all willy-nilly, automatically accepting every API call by default, denying almost nothing, with unrestricted access to some pretty advanced/low-level APIs that can do some serious damage if abused? 
And people say Flash was bad (well, they did only after Steve Jobs made a big deal about it; he wasn't wrong, but virtually NO ONE hating on Flash back then had any idea how it even worked before Steve bitch slapped it into oblivion and rightly so). Its syntax is arguably archaic (we don't need semicolons to end statements/lines these days, language parsers are much more capable now; look at Python, Ruby!), it's littered with inconsistencies in the language itself, and since everybody's a hammer (JS "expert"), everything looks like a nail (a suitable use for Node/JS/DOM/Script), even when it's damn well obvious, painfully so, that it's absolutely NOT. All this, and yet everybody's somehow an "expert" with JavaScript. Really? Really?

In all these years the only thing I've seen has been disdain for JS; if it was popular, it was because we hated it, we hated how badly designed it was. But then Node appeared and someone said isomorphic JS would be a good idea because finding backend engineers would be easier. Now there are tons of JS libs on npm, but a big percentage are quite minimal, unnecessary, and of questionable quality.

It isn't complicated. Crystal doesn't work on Windows. If the Elixir ecosystem is small, Crystal's is 10 times smaller. Unless an active community decides to do the grind and fill in all the missing pieces, Crystal may remain a niche in the foreseeable future.

Well, you can run Linux using WSL on Windows, so you can get Crystal working there anyway.

I’d like to use this to voice my appreciation for Ruby. Its ergonomics, flexibility, elegance and power continue to bring me joy and make me smarter every day.

It is incredibly malleable and yet pure. It carries a certain warmth and kindness that permeate its community.

It becomes faster every couple of years :)

I feel so grateful to work with a language that makes me feel like a wizard and look forward to writing code.

The community is so incredibly creative, smart and generous that I feel humbled every day.

I second this. There's something about the language that makes the community incredibly welcoming (or is it vice versa?)

Whereas the impression I get of the javascript community is that everyone is paranoid about doing the hottest new thing, like "oh you're still using x? how quaint, haven't you heard everyone is using y now?"

Ruby was built for programmer happiness and has always had a passionate community. During the height of its popularity, it was just like JavaScript with the paranoia. Now it’s a mature language and things are settled down. I miss _why and the whimsy the language had before.

Honestly I don't get why some people want to move to static types. Ruby is a dynamic language, that's the point of it... Giant orgs can just use Java or something. We need some languages to stay productive for those of us who work solo or in small groups. If I wanted static types I'd use Java, Go or something (probably Haskell).

This is a set of statements that strike me as pretty unreflective of the state of things these days. I have slung a lot of Ruby in my life and I literally-not-figuratively stopped the second I laid my hands on TypeScript because we've hit the point where gradual static typing is both easily available and super easy to work with. (And there's also Rust, which can scratch a whole different set of itches that I don't happen to personally have.)

If your idea of static typing is Java or Go or "something", if your idea of it is that it's for "giant orgs" and not improving your own correctness and throughput of code (I write better code, faster, in TypeScript than I ever did in Ruby!), yeah, it might not make sense. But that's a pretty out-of-date take, I think, and the wins the suitably plastic individual can get, even on a solo or small-team project, are significant.

The static languages I've used most are actually Haskell and recently Pony (mostly for things that involve a bunch of maths and processing in parallel). Just brought Go/Java up because it seems everyone is trying to turn every language into those.

I don't want Ruby to become another TS because I'd use a typed language if I wanted one. The problems I use Ruby for don't need the speed of a statically typed language, and it's nice to use a dynamic one for them.

I think what the parent comment is saying (and I agree) is that it's easy to over-estimate the cost and under-estimate the value. You don't _have_ to do anything with types, but it's another tool to use where it makes sense. Or put another way: provided it's not mandatory, what's the downside to having another tool in the toolbox?

The downside is the ecosystem turning into the JS/TS ecosystem. Where the zealots have pushed TS to the point it all just looks like C# and what makes JS nice is being lost.

Do you have any specific examples of things that make JS nice that have been lost because of the popularity of TS? As with types in Ruby, I was under the impression nothing was really lost as TS could be added incrementally as developers saw value.

I don't think I've talked to anyone who has gotten "over the hump" with TS and felt like they were missing something from JS - I may be in a bubble though. It's almost frustratingly beloved in my experience.

I’ve been writing TS for a year now and I find TS annoying. Especially for react components with state management of some kind, the types get so complex you almost need unit tests to assert they are what you think they are. Additionally, TS being a structural type system with no access to nominal types at all eliminates a whole class of “ghosts of departed proofs” modeling techniques. (And, I know you can work around this, but those workarounds are ugly.)

My view is that I’d use TS if I have to but I’d pick either plain JS / CLJS or something like purescript if I really wanted types.

The year is 2021.

People argue for a toolchain that takes a static language, compiles it to a dynamic one, then runs it with an interpreter on the server side.

Humanity ended shortly thereafter.

I haven’t touched the ecosystem really for a number of years but my last JS project was in TS and I hated it for the same reason. The overhead of bringing in yet more npm modules and types and build steps for questionable levels of type safety. Languages with much better type systems exist.

Such as?

The dynamic nature of JavaScript is lost when using TypeScript.

Although TypeScript can be added incrementally, in practice I've only seen it totally replace JavaScript.

While I do believe more people prefer TypeScript over JavaScript, I think it's because those people never deep dived JavaScript or bothered to learn it enough to see how powerful it really is. There are also people that just prefer typed languages and will shun languages without static types.

This really is a religious war and boils down to one's opinion. I like JavaScript without typing.

Once you have more than one team in the same codebase the opinions start to line up as people start to think the types will save them from stepping on each others' toes. Which it might! Publishing types is also a poor-man's contract testing, so people who like that sort of thing will like that sort of thing.

I have implemented a JavaScript virtual machine. I feel confident in my knowledge of JavaScript as a language and as an operating environment.

What power am I no longer able to leverage using TypeScript when it's appropriate to do so?

Why the downvoting? You somehow upset the typed mob I guess...

1. What's wrong with C#?

2. What made JS uniquely nice?

I wanted to leave these questions entirely open to answer, but I'll add my own opinion on (2) because I feel compelled: nothing, JS is quite possibly the worst language ever designed. Certainly the worst in widespread use.

Why is JS one of the worst languages ever designed? I've used many languages and JS is one of the better designed ones in my opinion.

>Why is JS one of the worst languages ever designed?

Implicit type conversions.

Function scoping.

Null and undefined, what is the difference?

Accessing an undefined variable doesn't throw an exception.

Assigning to an undefined variable without var puts it in the global scope.

No integer type.

I could go on and on. The only reason it has become successful is because it has a monopoly in the browser.
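For contrast, here is how Ruby, the language under discussion, handles a few of the same situations. A quick illustrative sketch in plain Ruby:

```ruby
# Implicit type conversions: Ruby raises rather than silently coercing.
begin
  1 + "2"
rescue TypeError
  # => String can't be coerced into Integer
end

# Accessing an undefined variable raises immediately:
begin
  some_undefined_variable
rescue NameError
  # => undefined local variable or method
end

# And Ruby has a real Integer type, with arbitrary precision:
(2**64).class  # => Integer
```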

My favorite recent discovery is calling a js function with too many or too few parameters. It'll just go ahead and do it.

There’s an implicit "arguments" object through which you can access the extra parameters. It’s JS's way of function overloading.

Which looks like an array but isn't.

There’s a lot of that kind of thing in JS, isn’t there? :(

I'd add

- Actually using any of the distinguishing features of prototypical inheritance is nearly always a bad idea, which makes the use of that model in the first place very questionable. One of the cornerstones of the language is pretty much one big foot-gun, to be wholly avoided.

What is [] + [] in JavaScript?
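For the record: JavaScript coerces both operands to strings there, so [] + [] evaluates to the empty string "". Ruby's answer to the same expression is the unsurprising one:

```ruby
# In Ruby, Array#+ is concatenation; the result stays an Array:
[] + []        # => []
[1, 2] + [3]   # => [1, 2, 3]
```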

There is the kernel of a nice idea in JS but so many bad early design decisions can never be walked back.

I would argue that TypeScript does yeoman's work in doing much of that walking back.

I plumb forgot that assigning a variable without `var` or `let` put it in the global scope because TypeScript yells at you for it.

> I'd use a typed language if I wanted one.

I think the general point the above commenter is trying to make is that not all typed languages are equivalent: i.e. if you're avoiding TS/typed ruby/etc. because you don't want something like Haskell/Java, then that's not a well-informed decision as they're (all) radically different approaches to static typing, each with their own unique benefits and drawbacks.

TS is nothing like Java nor Haskell. Nor Rust.

(I can't personally speak to Ruby/Pony yet)

Wait, Pony as in https://www.ponylang.io/ ? Since this is the first time I've seen someone bring it up, can I ask what your experience using it has been like? And also what you were using it for?

That's the one. Experience is that it's very good at what it does: multithreaded/distributed programs. Good C interop. It forces you into its paradigm, but it's a good one for creating performant code across cores.

Tooling isn't great, it's missing things like a language server (compiler gives good error messages at least), but it's also quite simple (whole syntax fits into a smallish page) and the documentation is decent.

Playing around with it for doing economic simulations. Actors + good multithreaded performance = pretty much perfect for the domain. Makes sense too, the creator and half the core members came from the financial field.

Ruby is my go-to for scripting, one off programs, and anything web-related (blog, putting together a small crud site). For things where I want performance, I just use a compiled/static language.

You don't use static types for speed. The first time I wrote some code in TS (which compiles to JS), I typed something web-storage related and the editor told me the function expected a string and I was passing a number; with plain JS I'd have to run the program to learn that. Ultra-expressive typing (Haskell) is cool, but you don't need anything that fancy to get huge gains in productivity. Ruby and Python are great languages for scripts, and Rails does a lot for you, but having the compiler hold your hand slaps down bugs like nothing else.

> But that's a pretty out-of-date take

Gradual/optional static typing are not new ideas. It’s just that they are fashionable now.

It used to be that not having to deal with types at all was the cool place to be in. Our computers were getting so much faster every year, why would performance be a concern? Programmers are more productive in dynamic languages and computer time is cheap, etc, etc.

The “correctness” pitch, the “required for larger projects” argument, compile-time vs. runtime errors: these are all debatable, even though they are often thrown into the conversation as irrefutable advantages of static typing.

The performance angle, not so much. Static will almost always be faster than dynamic typing, even with all the crazy tricks we’ve developed over the decades.

It’s not that they’re fashionable, it’s that their ergonomics have improved significantly over the past decade. I saw dynamic types as a response to cumbersome static type systems, but increasingly type systems are becoming more expressive and less cumbersome.

Yeah, this is how I see it, too. Sort of a gradual convergence.

On the static side, Java and especially C# are simply _better_ than they used to be. Type inference is great - I can just write var x = new List<Thing>() rather than having to stupidly repeat the type e.g. List<Thing> x = new List<Thing>(). Add in generics, and lambdas, and about a dozen other things that have now become widespread, and it's really a different game than 10 years ago.

Coming the other way, I never expected to see Intellisense-like autocompletion for languages like Ruby or Python. It honestly feels magical. Refactoring has gotten much better to where I can pull methods up and down inheritance chains using PyCharm or other mainstream IDEs.

I actually think it's mobile that's pushed people back toward static. Swift, ObjC, Java, and Kotlin are all static and there's less emphasis on this "one language" idea that drove a lot of people toward trying to use the same language (JS/TS) for their React front-ends and node backends.

But C# has had everything you've listed for over 10 years. Along with a great IDE that has provided very useful intellisense since pretty much the beginning.

You couldn't (easily) use C# to write the frontend for a web application that you wanted to use as either a developer or a user, though (which is to say sit down, ASP.NET Web Pages, nobody ever liked you).

That's the lock that TypeScript turns, IMO.

There aren't that many languages that do this even today, much less 10 years ago. You've basically turned what was a discussion about static typing in general into Javascript vs Typescript.

Not only C# and Java, but C++ has auto, Typescript has let, Rust has let and I think many other languages have type inference.

When it comes to code, I'm fairly certain "correctness" is in fact an irrefutable advantage.

The argument has always been whether or not the price of that correctness is too high.

But at this point if you're using a modern IDE / code editor (e.g. VSCode), it's actually easier to write statically typed code because the inference / auto-completion / etc is so much better when you do.

At least with TypeScript in VSCode.

> The argument has always been whether or not the price of that correctness is too high.

No. I mean, that's been part of the argument, but so has whether static typing actually gave you useful correctness. Many static languages have had type systems that are more designed around convenience of compilation (and performance of compiled code) than correctness; the type systems of the optional typecheckers of modern dynamic languages are leaps and bounds better than static type systems of most popular static languages a couple decades ago, and have been for some time.

> But at this point if you're using a modern IDE / code editor (e.g. VSCode), it's actually easier to write statically typed code because the inference / auto-completion / etc is so much better when you do.

If you have good inference, statically typed and dynamically typed code are virtually indistinguishable. TypeProf-IDE for Ruby, discussed in TFA, generates type signatures by inference from plain Ruby code.

Strong disagree with the idea that dynamic inference steps up to static declarations, particularly for Ruby, and especially if you're using anything that expects you to write code inside of its DSL. The inference tends to fall off, and because it isn't expected of you, nobody's doing anything to help there.

Maybe Rails is special-cased enough to get away with it, but I still maintain a project in Grape and none of the autocomplete solutions I have found for Ruby help significantly at all. Meanwhile over in TypeScript, I literally can't remember the last time I used `any` or `unknown` except at a module's edge during validation of untrusted input, and autocomplete is awesome.

> it's actually easier to write statically typed code because the inference / auto-completion / etc is so much better when you do.

Ruby has a pretty nice language server, nice linter, REPL (Pry), many other tools.

The best development experience IMO is still stuff like SLIME or Smalltalk environments.

"Intellisense" just makes using statically typed languages bearable. It's not a unique feature.

Intellisense will allow you to do explorative programming, finding what methods are callable on a given object. It’s much easier to look those up than to try to guess the whole state of the program, where it may accept this and that; even in the best case you might get a fraction of what a statically typed language’s autocompletion allows for.

Like, just seriously try IntelliJ with Java versus PyCharm with Python code.

Ruby has (always had) a REPL and .methods gives you all possible methods you can call on an object. Editors with completion have been around forever too.

"Intellisense" is just what MS calls it. It's not unique to statically typed languages. Completion has existed for dynamic languages probably longer than I've been alive. In fact, the reason tooling for Java/C# is so good is because the VM can feed it info as it's running, since even though the language is static the runtime is dynamic.
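A quick sketch of the runtime introspection being described; this works in irb or Pry with no tooling at all:

```ruby
# Every object can list the methods it responds to at runtime:
"hello".methods.include?(:upcase)   # => true

# Pattern-match over them to explore an unfamiliar object:
[].methods.grep(/sort/)             # includes :sort, :sort!, :sort_by, ...

# Or ask the narrower question directly:
"hello".respond_to?(:upcase)        # => true
```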

I have used that REPL extensively. I've written plugins for Pry for my personal usage. There are absolutely cases I've run into where it is nice to break into code, write it inline, and save the buffer. This is true. But I've run into more cases where having to do that to know what I'm working with in the first place is a tiresome slog.

Please try one out, the difference in quality is staggering. Also, .methods is a runtime thing, that’s hardly useful. Of course at runtime you must know the available methods.

I do write code in statically typed languages (C++, Haskell, have done Java before, etc...).

> Also, .methods is a runtime thing, that’s hardly useful

I think you missed the whole point of a REPL and dynamic languages... Developing while program is running = runtime things are definitely useful, they provide instant feedback (including completions, linting, that sort of thing).

Or try RubyMine for Ruby.

The challenge of providing a list of available methods in a dynamic environment is real, for sure, but there are great tools out there doing it right now.

That’s why I used correctness in quotes, because it’s no guarantee.

It's not a guarantee, but an environment/language that yells at you when you try to pass a boolean to a function that should only ever accept a string is going to result in a more correct program than one that just... doesn't say anything and lets you happily do things that make zero sense.

With Visual Studio also. Intellisense works wonders.

> Gradual/optional static typing are not new ideias. It’s just that they are fashionable now.

I'd add that Common Lisp has had optional type hints (as both documentation and performance improvement) for getting close to 40 years now.

You might not care about types but the computer you are running your code on, cares very much.

Are they releasing processors with typed registers now?

AFAIK registers and assembly is untyped.

That's true, however I think you're misunderstanding how languages work. The Ruby interpreter knows the "type" of everything. How could it not? It was there when you defined a variable, and it was there when you added data to it. It's there when you define interfaces and other things that might affect the type of something. It's perfectly capable of holding all of that information. And there's no meaningful difference between parsing "int x = 3;" and "var x = 3;" in a program.

I've gotten a lot of value from integrating Sorbet into a non-trivial 15-year-old Rails app. The gradual nature of the typing is very nice.

I've never written TypeScript, but I suspect the tooling around Sorbet is pretty far behind at this point; it's still worth it, though. For example, there is a whole class of unit tests that no longer need to be written. In addition to the gradual typing, having access to interfaces, typed structs, and enums is all nice too.

Yeah, Sorbet and the progress being made in Ruby-land makes me want to go back and give it a good look. I really like Ruby. I just really like writing silly in/out tests less.

    I just really like writing silly in/out tests less.
Were you writing a bunch of tests to make sure that A is passing arguments of the right type to B?

I'm not sure that's a great use of time. If A is passing the wrong thing to B, B will throw a `NoMethodError` once it tries to do anything with the arguments, which will make the spec test fail anyway.

But maybe I'm misunderstanding what you mean by "silly in/out tests"...

You can get away with that if you're writing something for immediate delivery, though I think it leads to a lot of not-my-problem thinking in teams that have to scale. On the other hand, I write a lot of libraries, both for internal and external consumption, and there are a lot of operations, in that context, where you won't get a `NoMethodError`--serialization, for example. You can happily serialize an integer if it's passed instead of a string. That doesn't mean it makes sense, or whatever is consuming it is going to be able to make heads or tails of it, and having to run code in order to know whether you've made trivial mistakes is shitty and demoralizing to an end user when they have to divine what a `NoMethodError` means when it's thrown deep inside of a library.

It can also then create security issues around untrusted input; you should be sanitizing at module boundaries any time something might be sensibly used with rando input, IMO, rather than relying on web developers who may or may not be competent enough to duplicate other consumers' effort to do it.

Having to do less work to enforce sanity at module boundaries, and having it tied in with the type system when you do have to do it, is a powerful force-multiplier. For example: I use `runtypes` to create validators in TypeScript when I must handle untrusted input and it's smart enough to take the validator I specify and create a type of the same shape for use at compile-time, so users who aren't dealing with untrusted input just have the nice computer cross the T's and dot the I's for them.

Those are really good counterexamples and I love your points.

However... just kidding, there's no "however." Great reply.

    You can get away with that if you're writing 
    something for immediate delivery, 
I've worked on some big Rails monoliths and this just wasn't an issue very often.

But as you say, I think library code is another story.

Incidentally, whether library or application code, I've found that keyword arguments with well-chosen names have reduced this problem even further. It's not clear that `do_something("blue")` is incorrect, but `do_something(user_age: "blue")` is self-evidently bad.
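A minimal sketch of that point, using a hypothetical method:

```ruby
# Keyword arguments name the data at the call site:
def register(name:, age:)
  { name: name, age: age }
end

register(name: "Ada", age: 36)      # intent is obvious to a reader
# register(name: "Ada", age: "blue")  runs, but the mistake is visible at a glance
# register("Ada", 36)                 ArgumentError: keywords are required here
```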

Out of date how? Tons of people write pure javascript and not typescript. They are all not as productive as you?

I'm saying that I am significantly faster and more correct in TypeScript than in JavaScript, and I know literally no people who are starting greenfield projects in JavaScript today.

People choosing to write JavaScript in 2021 might be plenty productive. I trust their code much less, though, unless they're writing a battery of tests to establish sanity at their module boundaries and making the extra effort to ensure correctness that you get very cheaply with TypeScript.

YOU are faster with Typescript. Can you accept the fact that certain people are not and do not like Typescript? You live in a Typescript bubble so you conclude everyone is doing that. Just open a jobs board and see for yourself not everyone is doing Typescript.

Ruby isn't forcing you to use static types. That being said, I actually think something like Sorbet makes even solo/small teams more productive because it trades some additional time writing boilerplate for dramatically reducing the number of bugs you write.

A good test suite is better at keeping the code bug free.

Not needing types is another benefit of good testing.

That's my experience, at least.

I don't think that static typing obviates the need for tests. However:

* Static typing catches many kinds of bugs earlier by simply not allowing you to write incorrect code in the first place.

* No matter how good your test suite is, you're still putting the burden on the human to always remember to write tests for corner cases.

* Static typing allows you to write many fewer tests by making invalid data unrepresentable. You don't have to write millions of unit tests of the type "what if this list is empty" if the function literally can't accept an empty list.
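Ruby can only approximate "invalid data is unrepresentable" with a runtime guard at the constructor; static typing moves the same check to compile time. A hypothetical sketch:

```ruby
# A list that cannot exist in an empty state:
class NonEmptyList
  attr_reader :items

  def initialize(items)
    raise ArgumentError, "items must not be empty" if items.empty?
    @items = items.dup.freeze
  end

  def first
    items.first
  end
end

NonEmptyList.new([1, 2]).first   # => 1
# NonEmptyList.new([])           # => ArgumentError at the boundary, not deep inside
```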

> Static typing allows you to write many fewer tests by making invalid data unrepresentable. You don't have to write millions of unit tests of the type "what if this list is empty" if the function literally can't accept an empty list.

We just don't write these kind of tests on our Rails codebases and we are fine. The world hasn't exploded yet anyway. If some piece of code is super tricky and sensitive then sure maybe then (though I have yet to see such a test case I think), but as a rule? No.

I'm surprised that you never have bugs of the type "this thing is nil and I didn't expect it to be". This is by far the most common type of bug I see in production Ruby applications and it simply can't happen with something like Sorbet.

Null exceptions can even happen in Java. I wasn't referring to those.

But most types (besides primitives) in Java are nullable, right? Sorbet literally wouldn't let you write this code:
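Presumably something along these lines; the sig line is Sorbet syntax, left as a comment here so the snippet runs as plain Ruby:

```ruby
# sig { params(name: String).returns(String) }   # Sorbet rejects shout(nil)
def shout(name)
  name.upcase
end

shout("hi")    # => "HI"
begin
  shout(nil)   # plain Ruby only fails here, at runtime
rescue NoMethodError
  # => undefined method `upcase' for nil
end
```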


Null is simply the most frequent example of this issue. Getting an integer rather than a string is super common, for example. Or a string instead of a date.

What does super common mean? Do I see these bugs once a month? No, I do not. Null exceptions are common I agree with that. For me its not enough to appreciate something like Java or Go but I understand the argument.

If you do TDD, you catch bugs before the code is even written.

I don't write a million tests. There are much better strategies for testing.

Simple tests that just execute the code will catch the vast majority of type mistakes.

If you're just testing pieces of code in isolation, then you're either writing lots of tests or making assumptions about the kinds of input that your function could reasonably receive or return. If you're testing end-to-end, you're either writing lots of (much more expensive) tests or you're missing corner cases. I don't think that's problematic per se. Testing is just a bit of a mismatch for the problems that type systems solve.

Do you think it's possible for well used types to displace the need for some tests?

I'm not the person you're responding to, but I would say: "yeah, but it'd probably be bad practice."

What would you be testing for, exactly? That `SomeClass#some_method` raises a `NoMethodError` when you pass it the wrong thing?

You could test for that, but I don't think that would be a good use of code or your time. If A is passing the wrong things to B, your specs will fail anyway on that `NoMethodError` once B tries to actually do something with those improper argument(s).

I suppose it could be useful in cases where you'd stubbed out B in your specs, but ehhhh.

My experience with a medium-size but long timespan project was the two things you mentioned a bit dismissively are actually pretty massive time sucks. People are terrible at stubbing, and refactoring without types is painful. The vast majority of my Ruby experience is within Rails, so maybe that's a contributing factor?

Having types not line up at various boundaries (DB/API/reading from a file) is already a pain, but Ruby made it worse by having that bad data pass through many layers until it actually blows up somewhere far removed from the issue. I worked on very real bugs where e.g. a corner case led to a date being deserialized as a string and then, because the last few characters were numeric, interpreted as a number, so when treated as a date it resolved as millis since epoch (or something similarly crazy). It took ~1 of those bugs for me to be convinced that I had no interest in dealing with those kinds of problems, and adding a 'Date' type means it fails in exactly the right place immediately and is a 2 second fix.

I agree with you though - it'd be a silly test to write, so how do you get the correctness/robustness without either types or tests that look like they're effectively validating types?

> Having types not line up at various boundaries (DB/API/reading from a file) is already a pain

That's not really accurate - Postgres (for instance) types ARE mapped to Ruby types whenever you read something from the database, just like they are being mapped to Java types or any other language. I guess you mean you can have inadvertent type coercion where a Ruby string is saved into a PG numeric column or something?

Anyway for super sensitive code (let's say payments/prices) you can work with DRY types or Sorbet or any other solution. As a rule it's quite rare that I actually see these problems. Can they occur? Sure. Do they happen so often I wish I was using C++? No. In fact I can't even say it happens more than once a year that I see this type of bug.

> I suppose it could be useful in cases where you'd stubbed out B in your specs, but ehhhh.

You're talking like that's rare and not something people constantly do when writing tests in Ruby.

It is of course super common. Unit tests should be stubbing out as much as possible/reasonable.

The presumption is that we're talking about a well rounded test suite with both unit tests and integration tests, and the integration tests would indeed be catching the fact that A is passing entirely the wrong thing to B.

Of course, this does mean we're leaning heavily on the test suite here. But, in any nontrivial application I hope that we would have a robust test suite, right? Otherwise we're going to have some problems whether we're statically or dynamically typed.

Over time I have come to view mock- and stub-heavy tests as next to worthless.

In my experience this here (usually some method unexpectedly getting passed `nil`) is the single biggest class of bugs in production Ruby applications.

Also, what happens when you don't get a NoMethodError? Duck typing is an extremely common practice in Ruby, which means that you can easily run into situations where code "runs" but the output is nonsensical.

    In my experience this here (usually some 
    method unexpectedly getting passed `nil`) 
    is the single biggest class of bugs in 
    production Ruby applications.
Right, but how do tests help you here?

Your specs aren't passing those unexpected nils, and if they were, your specs would fail on the resulting NoMethodError.

Static analysis can find these potential problems in a statically typed language - I miss the days when Resharper would let me know about potential nulls/nils. That's a strong argument for static typing. But, this particular comment thread is about addressing this class of bug in Ruby via tests, and my vote would be generally no.

    Duck typing is an extremely common practice 
    in Ruby, which means that you can easily run 
    into situations where code "runs" but the output 
    is nonsensical.
I've been working with Ruby full time for years (admittedly, mostly boring CRUD Rails apps) and I've just never found this to be a problem. It obviously can happen, but I just don't see those name collisions ever happening.

> But, this particular comment thread is about addressing this class of bug in Ruby via tests, and my vote would be generally no.

I'm pretty sure I started the comment thread ;).

I think there are a couple of scenarios where a function gets some rogue input (e.g., a string instead of an integer) which then triggers a NoMethodError. I agree that tests don't really help you there - what are you testing, that your method correctly throws a NoMethodError? That said, to combat this issue you often see dynamically typed codebases littered with scattershot validation, i.e. failing gracefully if you somehow wind up with unexpected input - and then you often write tests for that. Static typing often obviates those tests, because presumably the rogue input has already been handled further up the call stack.

Stubbing is the other issue; I think it happens much more frequently than you suggest that A is tested with B stubbed out and B is tested in isolation - or perhaps there's only one test of A which doesn't stub B, and that test doesn't happen to trigger the NoMethodError.
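
That "scattershot validation" pattern looks something like this (hypothetical method, purely for illustration):

```ruby
# Boundary checks that a type system would make redundant -- and that
# typically grow their own unit tests.
def apply_discount(price, percent)
  raise ArgumentError, "price must be Numeric" unless price.is_a?(Numeric)
  raise ArgumentError, "percent out of range" unless (0..100).cover?(percent)

  price - (price * percent / 100.0)
end

apply_discount(200, 25)  # => 150.0
# apply_discount(200, "25") raises ArgumentError at the boundary
# instead of a NoMethodError somewhere deeper in the call stack
```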

So I think we're mostly agreeing, but I do think you get to write fewer tests overall.

> It obviously can happen, but I just don't see those name collisions ever happening.

It has happened to me (I think in the Rails context less often) but I agree it's not exactly common. It is deadly when it does, though.

I came the other way. As in, tests displaced types.

Did Java for 13 years. Then moved to Ruby and lost the type system I had leaned so hard on.

After an adjustment period I got into the new groove of just testing all code, and I don't miss my hard typed days at all.

I think people remembering to test all the typing-related corner cases is never going to come close to being as effective as something enforced by static checking -- and even if it did, it seems like I've lost all the time I saved and then some, if that's the price of not having to write the declarations.

This has been my experience using Sorbet.

I sometimes get annoyed when I get stuck screwing around with the RBI files. Then I get in the flow and remember how fast Sorbet allows me to move.
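
For context, those RBI files are just signature stubs that Sorbet reads at check time; a minimal sketch (hypothetical class) looks like:

```ruby
# sorbet/rbi/example.rbi -- consumed by Sorbet's type checker,
# never executed at runtime.
class Invoice
  sig { params(amount: Integer, currency: String).returns(String) }
  def format_total(amount, currency); end
end
```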

    dramatically reducing the number of bugs you write.

I would say that's a big overstatement.

Have you tried writing a Ruby app with Sorbet vs without? I have, within the last year, and that's been my experience.

It's not, because "bugs you write" includes bugs that don't make it to production. I doubt there's a programmer who's worked in a language without static types who's not familiar with the "write, run, read error, find silly bug, fix silly bug, repeat" development loop. Usually it becomes second nature.

> Honestly I don't get why some people want to move to static types

I don't understand how anyone that has experience with dynamically typed languages and the insane runtime errors that can result from them would ever consider using a dynamically typed language. It's terrible and it actually provides little to no benefit in development speed. People always say development is faster in a dynamically typed language, this is not my experience. You need to actually run the program and step into it with a debugger in order to determine the type of anything at run time.

Given the popularity of typescript and how nearly all major internet companies have moved to typed versions of their dynamically typed languages it's clear to me the whole dynamic typing experiment has failed absolutely miserably.

> People always say development is faster in a dynamically typed language, this is not my experience. You need to actually run the program and step into it with a debugger in order to determine the type of anything at run time.

You just develop on the running program... If you just write it in your editor, run, check for errors, stop, edit more, run, stop, etc... then yes, you don't gain anything.

> You just develop on the running program...

Ah yes, don't worry about correctness at all, just wing it. I take it you haven't had to debug some of the stuff you've written?

You asked about the productivity boost... Well it's in being able to code, debug, etc... a running program.

Did you miss a decade or two of CS history? Lisp and Smalltalk have been around long enough that the value of programming with dynamic languages is known... I mean, there are spaceships and whatnot running on Lisp.

Or did CS simply start and end with Java?

Why are you bringing up Java? Java is terrible in its own way. Of course you will be more productive without types if the alternative is only Java (which it isn't). Typescript has an extremely expressive type system.

>I take it you haven't had to debug some of the stuff you've written?

Most Ruby devs work as contractors. They deliver the software and go to the next paying contract. They don't have to live with their software. :)

>Given the popularity of typescript and how nearly all major internet companies have moved to typed versions of their dynamically typed languages it's clear to me the whole dynamic typing experiment has failed absolutely miserably.

Python is one of the most popular programming languages.

Because there is a huge number of non-programmers using it for single-use code.

“No true Scotsman…”

And they added types to it

Static typing is like an automatic test for a specific category of errors - type errors. That's why it's useful.

Ruby's choice of making it optional means folks like yourself who want to move fast and break things are welcome to do so, but those of us working on more mature systems that need the reliability and lack of bugs can add this on and get the safety.

Ruby is strongly typed anyway... The runtime complains if you mis-type something just as a compiler would complain. In fact, linters just catch it as you type (plus any other typos).
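
The distinction is easy to demonstrate: JavaScript coerces (`1 + "2"` evaluates to `"12"`), while Ruby raises right away:

```ruby
# Dynamically but strongly typed: incompatible operands raise a
# TypeError at runtime instead of being silently coerced.
begin
  1 + "2"
rescue TypeError => e
  puts e.class  # => TypeError
end
```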

Mixing it up with JS (which is weakly typed)?

I don't want a move to enforced static typing everywhere in Ruby. But rbs is relatively non-intrusive. Having the standard library covered by it, and allowing people to tighten up things where they make sense makes it less cumbersome to avoid elsewhere. And it allows those people who feel they need to have type hints everywhere in their IDE to still use Ruby. I find it clutters things up more than it helps, but if it helps others then that's great.
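
As a sketch of how non-intrusive that is: the signatures live in a separate `.rbs` file (hypothetical example), so the `.rb` source stays untouched:

```
# sig/temperature_converter.rbs
class TemperatureConverter
  def to_fahrenheit: (Float celsius) -> Float
end
```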

I'm slightly concerned that people will overdo it. E.g. I've more than once wanted to pass an input to something that enforced stricter typing than necessary via guards e.g. checking its input with #kind_of? when it otherwise only needed a class that implemented a sensible #read (for example). But Rubyists are pragmatic - I think after a period of overzealous annotations (the way people went totally overboard with monkey patching for a while) most people will keep the type declarations just loose enough.
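
The over-tight guard versus the duck-typed one, with hypothetical names:

```ruby
require "stringio"

# Over-strict: rejects StringIO, Tempfile, sockets -- anything that
# isn't literally a File, even though they'd all work fine.
def checksum_strict(io)
  raise ArgumentError, "expected a File" unless io.kind_of?(File)
  io.read.sum
end

# Duck-typed: accepts anything with a sensible #read.
def checksum(io)
  raise ArgumentError, "expected something readable" unless io.respond_to?(:read)
  io.read.sum
end

checksum(StringIO.new("hello"))  # => 532
# checksum_strict(StringIO.new("hello")) raises, despite being harmless
```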

I really feel the same about javascript with typescript. It feels a heck of a lot like new grads or old c#/java devs trying to keep up with the times by switching to javascript, but then resisting the code structure and loose typings.

It's interesting though how much more robust the library ecosystem has become after TypeScript attracted a new crowd that wouldn't otherwise have gone near JavaScript with a ten-foot pole.

Maybe with the new developments in the Ruby ecosystem somebody will fix Net::HTTP so an absolute request timeout can be set to protect against slow clients/servers, and maybe oversized payloads as well. Maybe 2030?

in what way is the library ecosystem more robust? if anything library use is down due to centralization around specific frameworks and tool chains.

That seems to be more in line with "what's the benefit of static types?" than "what benefit does JS get from static types?". JavaScript is not only a very different language than it was 15+ years ago, but also takes up a significantly higher percentage of an app's codebase. The web is a drastically different place than when JS was created.

it taking up more of an app's code doesn't mean that it suddenly has a need for typing. but even then, that assertion is incorrect: javascript has been making up a large majority of the codebase for quite some time now. Nodejs is 12 years old and jquery 15. That argument is one that could've been made like 9 years ago, but even then, multiple projects have tried adding typing and the javascript community strongly rejected it (heck, typescript was pushing for it in 2014). Anecdotally, I've met more and more developers coming from C# projects who are being made to do frontend work or find work in the javascript world, and who seem very intent on pushing for typescript in their orgs.

> but even then, that assertion is incorrect, javascript has been making up a large majority of the codebase for quite some time now. Nodejs is 12 years old and jquery 15. That argument is one that could've been made like 9 years ago,

Typescript is 9 years old. React was still a project then and Angular was brand new. SPA's were still a new-ish concept then. JS was not used as widely 10 years ago as it is now, thanks to SPA's. The web was a drastically different place back then.

I am not at all saying that TypeScript should always be used, far from that. I just think it has a valid spot in the world of web development.

This is exactly how I feel as well. Though, maybe I’ll come around once I learn TS better? I recently had to use it on a new project. TS was, quite literally, a nightmare. I was amazed how less efficient and how much longer things took. We probably spent 20% of our time writing JS and 80% trying to figure out how to get TS to stop complaining.

Using a tool that you don’t understand is always like that, and is frankly foolish. I don’t know why we as a group are so reluctant to sit down and properly learn something first, and only then form an opinion on it.

I started using dynamic languages around 2008 (Python and JavaScript). Before that I was more into C/C++.

Granted, I've only written C in University settings where I'm writing small programs. I had no idea how to write "real" programs. But with Python and Javascript it felt like I could more easily write "real" programs.

What I found out is that I quickly burned out. Around 2011 I felt like I don't know how to start making programs. Programming basically became dreadful. I taught myself to wade through it anyway, convinced I would find the joy again when I get more proficient.

What eventually made me rediscover the joy of programming was switching to procedural programming and static typing.

First it was with Typescript. Now with Go.

Writing programs with just structs and functions is really enjoyable.

Programming in dynamically typed languages is dreadful. There's no joy in it.

This is my experience as well. Working with dynamic languages soured my experience with programming so far that I stopped doing projects of my own when I was in jobs requiring them. Sounds like hyperbole, but it's something I didn't realise until several years later, after having recovered the joy of programming (which happened because I moved jobs and started using a static language again).

I guess that in part it's a matter of mentality or personal style (I know that I think mostly in terms of guarantees, invariants and so on, which is why types are so useful to me; I feel like other people think more in terms of operations when they program, and maybe dynamic programming suits them more). But there are some huge advantages as well. The tooling is far better (like the incremental compilation in Eclipse, also available in IntelliJ but turned off by default IIRC, which highlights compilation errors while you are still writing the code); there are fewer unit tests required because there are fewer things that can go wrong, since a lot of them can be enforced by the type system; and, most important of all, reading someone else's code is far easier because mandatory documentation, in the form of type names, is all over the place (yep, not a fan of type inference either). And there are plenty of other advantages besides.

I just don't see myself working on even a moderately sized code base in a dynamic language. The pain is too real; I have been there.

Have you wrangled with JSON using Go? Absolutely dreadful.

Have you written multiple microservices in Go? The lack of an opinionated framework often means that each microservice contains code organized in its own unique, ad-hoc way, with lots of repeated boilerplate code. The learning curve to understand how each service's code is organized gets old fast. With Ruby and with RoR I never have to waste time on this and can get straight to the business logic.

When writing one-off scripts, I can see the advantages of dealing with just structs and functions.

> Have you wrangled with JSON using Go? Absolutely dreadful.

That was my first semi-serious project. It was around 2012. Yes, it was dreadful. The problem is I was still programming with the mentality of someone who wants to use dynamic typing.

I would read JSON from an HTTP endpoint, then just make assumptions about which keys were available and read them off blindly.

Just like with dynamic typing, when the keys don't exist for whatever reason, your program crashes.

I don't do that anymore. When programming in Go, JSON is just a serialization format for a struct. You have a concrete struct type and you just use the json package to fill it with data. It's so easy and trivial.
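
For comparison, the declare-the-shape-up-front idea translates to Ruby as well (hypothetical payload): required keyword arguments make a missing key fail at construction instead of deep in the business logic:

```ruby
require "json"

# Declaring required keywords means a missing or misspelled key in the
# payload raises ArgumentError immediately, not a nil error later.
User = Struct.new(:name, :email, keyword_init: true) do
  def initialize(name:, email:)
    super(name: name, email: email)
  end
end

payload = '{"name": "Ada", "email": "ada@example.com"}'
user = User.new(**JSON.parse(payload, symbolize_names: true))
user.email  # => "ada@example.com"
```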

> Have you written multiple microservices in Go?

God no! Why would I do that? I hate microservices. I like Go because I can make a self contained application.

> The lack of an opinionated framework often means that each microservice contains code that is organized in its own unique ad-hoc way with lots of repeated boiler-plate code.

So you're talking about working within an organization with multiple insulated teams, each writing services that are supposed to communicate with each other, but the teams themselves don't follow any standard.

Well, that's just one of the awful things about microservices. I don't think the language matters.

> With Ruby and with RoR I never have to waste time with this and I can get straight to the business logic.

Having to debug a RoR codebase was the worst experience in my programming life. It's all magic. Trying to read the code helps you with nothing. You can't read the code to understand the program. You have to read the RoR documentation to understand all the magic. Worse, you can't just read a part of it: you have to read all of it. Because nothing in the code will give you any hint as to _which_ magical aspects of RoR this codebase is using. So unless you know all the magic that RoR does, you have no idea what's going on.

> When writing one-off scripts, I can see the advantages of dealing with just structs and functions.

It's the exact opposite. When writing one off scripts, I can see why someone would not want to bother with structs. You usually are dealing with strings any way (filenames, paths, keys in a json/yaml file, etc).

> I don't do that anymore. When programin in Go, JSON is just a serialization format for a struct. You have a concrete struct type and you just use json to fill it with data. It's so easy and trivial.

This does mean that you should know and define your schema in advance, which is not always doable. An alternative is to use a sum type and pattern matching to safely deconstruct `interface{}`, but Go has notably poor support for sum types. Static typing is not really the culprit here, but Go makes a bad example for anonymous JSON parsing.

>Have you wrangled with JSON using Go? Absolutely dreadful.

What's dreadful about it? I use it all the time and find it easy to use. I love static typing though and prefer my JSON to have a proper schema (to automatically generate transport layers in both Go and JS from the same source)

>The lack of an opinionated framework often means that each microservice contains code that is organized in its own unique ad-hoc way with lots of repeated boiler-plate code.

Our organization has a microservice generator tool which creates a new microservice for you so that you could start writing business logic immediately and didn't have to think about how to organize your code. It doesn't really do much - it defines a source generator for the transport layer (from protobuf descriptions so you don't have to deal with JSON), creates a bunch of folders ("put domain logic here" etc.), and adds some infrastructure code to connect to DB, RabbitMQ etc. while also adding imports to our org-wide common utils Go library. It took like a few days to create this tool, but the author already had a lot of experience writing microservices so he knew how to structure code and what dependencies to use.

I'm a different guy, but here's my opinion:

>Have you wrangled with JSON using Go? Absolutely dreadful.

No, it is not. You just take a JSON object and write it down as a Go struct. Yes, it takes more time than in JS or Ruby, but it makes the code much more readable. You can open a project and see what kind of JSON it expects as input. All the contracts are there. No need for any kind of schemas or YAML definitions (though you can generate one if you need to).

>Have you written multiple microservices in Go?

We currently have more than a hundred of those. The lack of an opinionated framework is a bad thing only from the management point of view. Once you and your team have settled on conventions, there is really no difference from using a framework.

Again - yes, it requires more time and is most likely not a great option for a small company without an established engineering culture (i.e., no preferred ways of doing different kinds of things), but it's not a problem for a relatively big company. We have a number of teams solving different problems and thus using different ways of building their services (micro or not).

>With Ruby and with RoR I never have to waste time with this and I can get straight to the business logic.

Yes, until you face a problem where your favourite gem is not enough to do the job and you have to go the hacky route.

PS: But honestly, this whole topic is getting old. I've built apps in both Ruby (mostly not Rails though; we used Roda) and Go, and while RoR/Ruby/etc. are great for one set of tasks, I'd never use them for others - for example, systems integration, which has been my main job for the last ~5 years.

People often forget that web dev is not just your clients' browser-to-server communication.

> Have you wrangled with JSON using Go? Absolutely dreadful.

Yes. Of course it's dreadful. It's JSON. It's way better in Go than Ruby or Javascript, though.

:) I love typed languages, and my current favorite is Rust. I got my most recent position because, when asked whether I preferred procedural-style programming or OOP, I told them I tend to lean heavily on simple single-inheritance schemes rather than complicated, hard-to-trace architectures and over-engineered design patterns, with the occasional interface class where OOP makes sense, and mostly procedural code concentrating on bending to the data rather than bending the data to your needs. It was the only thing that separated me from the other C++ candidates, as we all had similar levels of experience and coding skill.

Dynamic languages start becoming problematic as the size of the codebase increases. It becomes very hard to maintain.

Use whatever language you want; there is no large codebase that's easy to maintain.

I agree, but maintaining a large codebase in a statically typed language is orders of magnitude easier than in a dynamic language.

As the codebase grows larger, if people don't have the discipline to write proper code, the code will get more and more complicated, since you will not know exactly what you will get. Dynamic languages are mostly easy to get into (which is a strength), but this becomes a weakness as the project grows larger.

By the time they get giant with [interpreted language], the giant org cannot pivot to Java without sacrificing a year of forward momentum on product development.

It's far easier for them to adopt a static analysis tool that mimics the type safety that Java provides than to migrate their codebase.

Ruby is NOT becoming static. It will just have tools like Sorbet or rbs files. The community will for sure stay mostly in dynamic programming. The huge orgs like Shopify or Stripe who want more strictness will now be able to do it in their Ruby codebases.

Probably they got sick of having to read a function body (and maybe a couple levels of indirection from there) to figure out what the hell a method returns while doing maintenance programming on a hoary old Ruby codebase.

Ruby is and will be the same language even after static types. A Ruby programmer will need to opt-in to writing type annotations in an external file and run a separate type checker (that is not a part of MRI binary). The benefit is if libraries ship with type definitions, type inference can detect type errors early in the IDE. If no type definitions are available, you do not get ahead of runtime type errors.

It is possible to be very productive in a language with a static type system. In my view, it is actually much easier.

Mostly, I really want Rails in a typed language.

There's a reason it didn't come from a typed language... It uses a lot of dynamic features and metaprogramming.

That would be nice. There are so many attempts to clone it, and the clones are always missing some important piece of what makes Rails Rails. Lack of good, heavily integrated libraries is a universal failure of all of the ones I've seen.

This is the most frustrating part - having worked in an enterprise Rails project, I wouldn't wish that on anyone valuing their own time. At the same time, it is the most complete offering I've used by a long shot for getting started.

I was writing a web app in Rust (I want to be cool!) and only after a couple hours of implementing CRUD did I realize I'd effectively made an inconsistently implemented rails scaffolding. The default opinionated round trip for DB -> application -> UI for CRUD is still really slick.

Redwood does this in TypeScript
