I remember being taught to use yacc in our compiler course because "writing it by hand is too hard". But it looks like Ruby joins the growing list of languages with hand-written parsers; apparently working with generated parsers turned out to be even harder in the long run.
That said, replacing a ~16k line parse.y[1] with a 22k line prism.c[2] is a pretty bold move.
Line count, like any metric, gives you a quick idea of some quantity, and that's it. To start getting a sense of what it means, you need to be more acquainted with the topic at hand.
It's not that uncommon to have an implementation whose code is lengthier but follows an obvious pattern, while the smarter, compressed implementation is not necessarily trivial to grasp, even for people seasoned in metaprogramming, reflection and so on.
Not to say that's what happened here; the point was just to recall that line count is not an absolute linear metric.
I'm pretty sure the only reason people ever used parser generators is that they accept a notation that vaguely resembles the formal description of the target language. I always found them very confusing to write, confusing to debug, and much less efficient than writing your own. It's actually pretty straightforward once you get the tokenization and lookahead working.
Agreed. Parser generators are a magic black box. Parsing is not too difficult (there is some actual computer science in some spots), but I think parsing should be a core competency of a programming language project to unlock its full potential.
It seems like you're ignoring the context/environment. Ruby has enough advanced developers, a large enough test suite, and enough people who care about performance that it can tackle the parser as a longer project regardless of its complexity. The same applies to other popular languages. But it won't apply to smaller projects with very localised parser use. In those cases writing anything custom would be a waste of time (and potentially introduce bugs that were solved years ago in generators).
Having tried both on solo projects, I disagree: like other commenters here, I've found parser generators to be a big waste of time.
Writing a parser by hand requires understanding the theory of parsing and understanding your implementation language. Writing a parser with a parser generator requires understanding the theory of parsing, your implementation language, and a gigantic black box that tries unsuccessfully to abstract away the theory of parsing.
The time spent learning and troubleshooting the black box is almost always better spent putting together your own simple set of helper methods for writing a parser and then using them to write your own. The final result ends up being far easier to maintain than the version where you pull in a generator as a dependency.
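To make that concrete, here's a minimal sketch of what those helpers can look like in Ruby (the helper names and the toy grammar are my own invention, not from any particular library):

```
# Toy recursive-descent parser built from a handful of helpers.
class Parser
  def initialize(tokens)
    @tokens = tokens
    @pos = 0
  end

  def peek    = @tokens[@pos]
  def advance = @tokens[@pos].tap { @pos += 1 }

  def accept(tok)
    advance if peek == tok
  end

  def expect(tok)
    accept(tok) || raise("expected #{tok.inspect}, got #{peek.inspect}")
  end

  # expr := term (('+' | '-') term)*
  def expr
    node = term
    node = [advance, node, term] while [:+, :-].include?(peek)
    node
  end

  # term := NUMBER | '(' expr ')'
  def term
    if accept(:"(")
      expr.tap { expect(:")") }
    else
      advance # a number token
    end
  end
end

p Parser.new([1, :+, :"(", 2, :-, 3, :")"]).expr
# => [:+, 1, [:-, 2, 3]]
```

Everything past this point (precedence levels, error recovery, an AST class) is just more of the same pattern.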
Parser generators handle a lot of edge cases for you and are battle tested.
Unless I had a relatively simple grammar or had very strict performance requirements (like in the case of Ruby), I would not trust a hand rolled parser on a CFG by someone who isn’t dedicated to the craft. (PEGs are simpler so maybe).
I’ve written recursive descent parsers by hand and they look very simple until you have to deal with ambiguous cases.
The big plus of parser generators is that they report ambiguities. Handling conflicts is a pain but explicit.
Do people who write predictive recursive descent parsers (LL(k)) really calculate first/follow sets by hand? What if the grammar requires backtracking?
There's nothing to stop you from writing out a grammar in some form that is intelligible to a verification tool and then implementing the grammar by hand. I almost always write out the grammar anyway because that's the design—without it I'm flying blind. The cost of the generator isn't writing out the grammar, it's in using the runtime code it generates, which is optional even if you want to use it for verification.
> I remember being taught to use yacc in our compiler course because "writing it by hand is too hard". But it looks like Ruby joins the growing list of languages with hand-written parsers; apparently working with generated parsers turned out to be even harder in the long run.
I've been writing parsers for simple (and sometimes not so simple) languages ever since I was in middle school and learned about recursive descent parsing from a book (I didn't know that's what it was called back then; the book had a section on writing an expression parser and I just kept adding stuff) - that was in the 90s.
I wonder why yacc etc. were made in the first place, since to me they always felt more complicated and awkward to work with than writing a simple recursive descent parser that works with the parsed text or builds whatever structure you want.
Was it resource constraints, ones that by the 90s didn't really exist anymore but whose pressure in previous decades ended up shaping how parsers were meant to be written?
Parser generators will tell you whether the grammar given to it is well-formed (according to whatever criteria the parser generator uses).
When hand-rolling a parser, there could be accidental ambiguities in the definition of your grammar, which you don't notice because the recursive descent parser just takes whatever possibility happened to be checked first in your particular implementation.
When that happens, future or alternative implementations will be harder to create because they need to be bug-for-bug compatible with whatever choice the reference implementation takes for those obscure edge cases.
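A toy illustration (mine, not from the thread): hand-roll the grammar `expr := NUMBER ('-' expr)?` the natural way, and you've silently decided that `-` is right-associative, without that choice being written down anywhere:

```
def parse(tokens)
  left = tokens.shift           # NUMBER
  return left if tokens.empty?
  tokens.shift                  # consume '-'
  [:sub, left, parse(tokens)]   # recursing on the right decides associativity
end

p parse([1, :-, 2, :-, 3])
# => [:sub, 1, [:sub, 2, 3]]   i.e. 1 - (2 - 3), not the usual (1 - 2) - 3
```

An LR generator would have forced that ambiguity into the open as a conflict; here it just becomes an undocumented quirk of the implementation.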
My hot take is that the allure of parser-generators is mostly academic. If you're designing a language it's good practice to write out a formal grammar for it, and then it feels like it should be possible to just feed that grammar to a program and have it spit out a fully functional parser.
In practice, parser generators are always at least a little disappointing, but that nagging feeling that it _should_ work remains.
Edit: also the other sense of academic, if you have to teach students how to do parsing, and need to teach formal grammar, then getting two birds with one stone is very appealing.
It is not academic. It is very practical to actually have a grammar, and thus the possibility to use any language that has a parser generator. It is very annoying to have a great format but no parser and no official grammar available, and to be stuck with whatever tooling exists, because you would have to come up with a completely new grammar to implement a parser.
I fully agree that you need to have a grammar for your language.
> and thus the possibility to use any language that has a parser generator.
See, this is where it falls down in my experience. You can't just feed "the grammar" straight into each generator, and you need to account for the quirks of each generator anyway. So the practical, idk, "reusability"... is much lower than it seems like it should be.
If you could actually just write your grammar once and feed it to any parser generator and have it actually work then that would be cool. I just don't think it works out that way in practice.
Good error reporting gets really tricky with generated parsers. That said, it can be a nice time saver for smaller things like DSLs and languages early on.
Even then, yacc and bison are pretty solid overall. I believe Postgres still uses a yacc grammar today, as another high-profile example. I'd argue the parsing of SQL is one of the least interesting things an RDBMS does, though.
I can only imagine working with generated parsers becoming more difficult if the syntax of a language is highly ad hoc or irregular, not elegant like concatenative or lispy languages, or Smalltalk style - which is ironic, given Ruby's history. Maybe they added too many bells and whistles.
In every other case, having the grammar in the form of parser generator macros should be better and preferred, since it is portable to other languages and tools and lends itself to being more readable (with good naming).
Ruby is getting more and more awesome these last few years, especially when it comes to performance. Since 3.3 I've been running all my apps with --yjit, it makes a tremendous difference!
No wait, I know Oracle has a bad rep, which is deserved, but TruffleRuby and GraalVM are truly open-source, not open-core. They actually did something great this time.
> You will need to sign the Oracle Contributor Agreement (using an online form) for us to able to review and merge your work.
Read my lips:
N. O.
Read the CLA. This is a trap, do not get yourself or your company caught in it. It is open-source for now, until it gets enough traction. Then the rug will be pulled, the code will be relicensed as well as any further development or contributions.
This is insane, I cannot believe anyone can read and understand this and not consider the abuses of power it allows:
> 2. With respect to any worldwide copyrights, or copyright applications and registrations, in your contribution:
> ...
> you agree that each of us can do all things in relation to your contribution as if each of us were the sole owners, and if one of us makes a derivative work of your contribution, the one who makes the derivative work (or has it made) will be the sole owner of that derivative work;
> you agree that you will not assert any moral rights in your contribution against us, our licensees or transferees;
> you agree that we may register a copyright in your contribution and exercise all ownership rights associated with it; and
> you agree that neither of us has any duty to consult with, obtain the consent of, pay or render an accounting to the other for any use or distribution of your contribution.
I would go as far as to state that anyone who contributes any code to this works against open source (by helping out an obvious rugpull/embrace-extend-extinguish scheme that diverts adoption and contribution from cruby/jruby) and against their fellow developers (by working for free for Oracle).
MySQL was forked, and the fork is the de facto standard shipped by Linux distros. To me the only MySQL that existed was the one by Sun; now MariaDB has completely succeeded it.
Do you see the licensing/distribution clusterfuck with Java as a good example of open-source stewardship by Oracle? Which Java distro are you using?[1]
Do you see the Google v. Oracle Java API copyright case as a good example of open-source stewardship by Oracle?
You know what else is prudently (/s) stewarded by Oracle? ZFS. That is why it is still not part of the Linux kernel. A company that is basically a meme for the number of lawyers it employs would easily find a safe way to allow integration into the Linux kernel, if only they wanted to contribute.
The examples above show exactly why Oracle has a decidedly bad reputation. On top of that, their CLA enshrines their shit treatment of the open-source movement and their free slave labour^W^W^W open-source contributors.
That has been the story of every dynamic language since forever; thankfully the whole AI focus has finally made JITs matter in the CPython world as well.
Personally I learnt this lesson back in the 2000s, in the age of AOLServer, Vignette, and our own Safelayer product, all based on Apache, IIS and Tcl.
We were early adopters of .NET, back when it was only available to MSFT Partners, and never again used scripting languages without compilers for full-blown applications.
Those learnings are the foundations of OutSystems: same ideas, built on a powerful runtime, with the hindsight of our experiences.
> Personally I learnt this lesson back in the 2000s, in the age of AOLServer, Vignette, and our own Safelayer product, all based on Apache, IIS and Tcl.
Woah, your mention of “Vignette” just brought back a flood of memories I think my subconscious may have blocked out to save my sanity.
No, I worked with the founders at a previous startup, Intervento, which became part of an EasyPhone acquisition; that company was later renamed Altitude Software alongside other acquisitions.
They eventually left and founded OutSystems with what we had learned since the Intervento days; OutSystems is one of the greatest startup stories in the Portuguese industry.
This was all during the dotcom wave of the 2000s; I left for CERN instead.
What's a scripting language? Also, I'm not sure about Tcl (https://news.ycombinator.com/item?id=24390937 claims it's had a bytecode compiler since around 2000), but the main Python and Ruby implementations have compilers (they compile to bytecode, then interpret the bytecode). Ruby got an optional (has to be enabled) JIT compiler fairly recently, and Python has an experimental JIT in the latest release (3.13).
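You can watch CRuby's compiler at work yourself; `RubyVM::InstructionSequence` exposes the bytecode it produces:

```
puts RubyVM::InstructionSequence.compile("1 + 2").disasm
# Output abridged (exact opcodes vary by Ruby version):
#   0000 putobject_INT2FIX_1_
#   0002 putobject        2
#   0004 opt_plus
#   0006 leave
```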
"... the distinguishing feature of interpreted languages is not that they are not compiled, but that any eventual compiler is part of the language runtime and that, therefore, it is possible (and easy) to execute code generated on the fly."
Just-in-time compilation of Ruby lets you elide a lot of the overhead of dynamic language features and execute optimized machine code instead of running in the VM's bytecode interpreter.
For example, doing some loop unrolling for a piece of code with a known & small-enough fixed-size iteration. As another example, doing away with some dynamic dispatch / method lookup for a call site, or inlining methods - especially handy given Ruby's first class support for dynamic code generation, execution, redefinition (monkey patching).
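A rough sketch of what that means at a single call site (my example, not from the release notes):

```
def stringify(items)
  items.map { |item| item.to_s } # one call site: item.to_s
end

# Monomorphic: every receiver here is an Integer, so the JIT can cache
# the Integer#to_s lookup for this call site and skip dynamic dispatch.
stringify([1, 2, 3])

# Megamorphic: many receiver classes hit the same call site. Older YJIT
# fell back to the interpreter for these; per the notes below, 3.4 doesn't.
stringify([1, :sym, "str", 2.5, nil])
```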
> In particular, YJIT is now able to better handle calls with splats as well as optional parameters, it’s able to compile exception handlers, and it can handle megamorphic call sites and instance variable accesses without falling back to the interpreter.
> We’ve also implemented specialized inlined primitives for certain core method calls such as Integer#!=, String#!=, Kernel#block_given?, Kernel#is_a?, Kernel#instance_of?, Module#===, and more. It also inlines trivial Ruby methods that only return a constant value such as #blank? and specialized #present? from Rails. These can now be used without needing to perform expensive method calls in most cases.
During their black friday / cyber monday load peak, Shopify averaged between ~0.85 and ~1.94 back-to-back RPS per CPU core. Take from that what you will.
You seem to imply that everything they run is Ruby, but they're talking about 2.4 million CPU cores on their K8s cluster, where maybe other stuff runs as well, like their Kafka clusters [1] and Airflow [2]?
Obviously you meant for the whole infrastructure: ruby / rails workers, Mysql, Kafka, whatever other stuff their app needs (redis, memcache, etc), loadbalancers, infrastructure monitoring, etc.
That's certainly not what I get out of what they said.
Shopify has introduced a bunch of very nice improvements to the usability of the Ruby language and their introductions have been seen in a very positive light.
Also, I'm pretty sure both Shopify for Ruby and Facebook for their custom PHP stuff are both considered good moves.
1. Wondering 3.4 JIT performance vs 3.3 JIT on production rails.
2. Also wondering what upside could Ruby / Rails gain on a hypothetical Java Generational ZGC like GC? Or if current GC is even a bottleneck anywhere in most Rails applications.
I would expect some measurable improvement given how object-happy rails programming is. It's not uncommon to see 3 layers of models just wrapping a single variable - "objects that could've been functions". Some kind of tiers like generations or per-request pools would be amazing.
> Also wondering what upside could Ruby / Rails gain on a hypothetical Java Generational ZGC like GC? Or if current GC is even a bottleneck anywhere in most Rails applications.
Ruby's GC needs are likely to be very far from the needs of JVM and .NET languages, so I expect it to be both much simpler and relatively sufficient for the time being. The default Ruby implementation uses a GIL, so the resulting allocation behavior is likely nowhere near saturating the throughput of a competent GC design.
Also, if you pay attention to the notes discussing the optimizations implemented in Ruby 3.4, you'll see that such JIT design is effectively in its infancy - V8, RyuJIT (and its predecessors) and OpenJDK's HotSpot did all this as a bare minimum more than 10 years ago.
This is a welcome change for the Ruby ecosystem itself I guess but it's not going to change the performance ladder.
I don't understand the point of it when the `.map(&:upcase)` syntax is shorter. This just seems like yet another syntactic sugar Rubyism that doesn't really add anything.
If it's an alternative to the `|x|` syntax when using only one block variable, then I like that.
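That's exactly what it is: Ruby 3.4's `it` is an implicit name for a single block parameter, and it pays off as soon as the block does more than one bare method call:

```
words = ["foo", "bar"]

words.map { |w| w.upcase }     # classic explicit block parameter
words.map(&:upcase)            # shorthand, but limited to a single send
words.map { it.upcase }        # new in 3.4, same result

# &:upcase can't express this; `it` keeps it short without naming anything:
words.map { it.upcase + "!" }  # => ["FOO!", "BAR!"]
```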
I haven’t worked in ruby or rails in a few years but both seem like they’re in great spots and I’ll be spinning up a new project with Rails 8 soon. Hype
Years back I took over ownership of the third-party Arch Linux package for ruby-build because the maintainer at the time wasn't using it anymore and was looking to pass it off. At the time, I had no idea that Ruby released every Christmas, but I found out a few months later when I got an email mentioning the package was out of date that day. Even though I haven't done much Ruby dev for years now, it's been a small little tradition of mine since then to update the package first thing every Christmas morning and push out the update (basically, just updating the version number in a file in a git repo and then running a couple commands to update the checksums and push the changes; nothing anywhere close to the amount of work that people who actually develop that tool do, let alone the people who work on the language!). I can't help but feel like that farmer from the meme saying "it ain't much, but it's honest work"; I've enjoyed the little tradition I've built up, and I like thinking that maybe every now and then someone noticed and was pleased to get the updates without having to file a notice reminding me to update things (although that's happened a few times since that time years ago, I hope it hasn't been too often!).
Just now, I was surprised to see that the package seems to be getting put into the official Arch repos, so my eight years of very minimal volunteer service seem to be at an end. I still think I'm going to remember doing this and smile a little every Christmas morning for years to come!
I want to try Ruby since the news of Rails 8 came out, but it's been so difficult that I just gave up. Installing Ruby on Mac and Windows and actually getting the 3.3 version required for Rails 8 was a huge mission and test of patience because every installer defaulted to older versions of both Ruby and Rails even one month after the release. And yes, even Docker required tweaking to get the versions and I had issues with devContainers anyway...
I finally got it installed and then followed some tutorials only to see that Rails' html.erb files have completely broken syntax highlighting in VSCode and other editors. I facepalmed and though I tried to search for a fix online, I couldn't find one. I saw posts mentioning it in forums and yet not a single solution posted.
So I gave up. I tried in Mac, Windows and Linux. If someone here knows how to fix the broken highlighter, that can be my Christmas gift today, but for the most part I've moved on.
Ruby is something like an "improved" Python, with a better OO system, a code block syntax that makes it easy to use callbacks, more consistent standard libraries, etc. It could be what Python is today.
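For instance, any method can take a trailing block as a callback, which is what makes Ruby's resource-handling idioms so clean:

```
# The block is an inline callback; the file is closed when it returns.
File.open("out.txt", "w") do |f|
  f.puts "hello"
end

# An ad-hoc comparator, no separately defined function needed:
[3, 1, 2].sort { |a, b| b <=> a } # => [3, 2, 1]
```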
I wouldn't say niche, but the killer app of Ruby is Rails, a web framework similar to Django. In fact, many people treat them as if they were the same. But there are big projects that use Ruby and are not related to Rails. As far as I remember: Metasploit, Homebrew, Vagrant and Jekyll.
Personally I think Ruby is an amazing language for writing shell scripts. I wrote a blog post about it; you can see it and its discussion here: https://news.ycombinator.com/item?id=40763640
Can you name one way Ruby has parity with Python? Ruby is a dead language that uses sponsored posts here. Nobody actually uses this since like 2018 but some people are paid to hype it up. Just look at the empty praise. No real applications mentioned.
huh. I'm not sure if I understood you right, do you script and configure those in ruby, or have you written them in ruby from scratch? Are the sources available to read/learn from?
Beware that one of the joys of writing these for my own use is that I've only added the features I use, and fixed bugs that matter to me, and "clean enough to be readable for me" is very different from best practice for a bigger project.
I'm slowly extracting the things I'm willing to more generally support into gems, though.
It's the language with the highest ratio of (useful work / LOC), so it's the least verbose language. This makes it very suitable to write and understand complex scripts, because the reduced boilerplate means less cognitive overhead for the programmer. As a result, experienced programmers can be extremely productive with it.
The well-known Rails framework uses this to great effect. However, some people argue that the choice of "convention over configuration" and the extensive use of metaprogramming, derisively called "magic", make it less suitable for inexperienced teams: they get too much rope to hang themselves, and the lack of explicitness starts working against you if you're not careful.
It depends on which kind of magic. Everybody loves having some magic in their life, as long as it doesn't turn out to be a curse unintentionally sprung from a well-meaning wish.
Also, you don't want each and every thing to be the result of spells you have no clue how they were cast.
I see some people saying that Ruby is too much "magic", when what is actually magic is Rails. Ruby itself owes its high useful-work-per-LoC ratio to its syntax. For example, you can spawn a thread with:
    thread = Thread.new do
      # thread code
    end
    ...
    thread.join
In this example we can see that it's not magic, only concise.
Rails has some very, very good features that make standing up a CRUD app with an administrative backend _very easy_.
It's also got a bunch of semi-functional-programming paradigms throughout that make life quite a bit easier when you get used to using them.
Honestly, if it had types by default and across all/most of its packages easily (no: Sorbet + Rails is painful, or at least was the last time I tried), I'd probably recommend it over a lot of other languages.
It's not a 100% compatible replacement, but I've ported a few things with only trivial changes. I didn't say it's a drop-in, just that it's a fine choice.
Compile/test time is ok. It's a few extra seconds to run tests, but hasn't been an issue in practice for me.
I've tended to find Kotlin to be the direction I'm happier going in; it scratches my particular itches more effectively. I can absolutely see how it's a very effective choice, though.
I love Rails and spent a good chunk of my career using it - and I'd recommend it more if only the frontend story hadn't been so bumpy over the years, with all the variations of asset pipelines.
I wish the TypeScript/React integration were easier. Say what you will, but there's no way you can achieve the interactivity and convenience of React (et al.) UIs with Turbo/Hotwire in a meaningful time.
Can you elaborate more on this? Years ago, I used to primarily do Rails development. Recently I built some web apps that use a JVM backend (one app uses Java & Spring, the other Kotlin & Micronaut) and a React frontend. One thing I ended up really missing is that the frameworks, especially with a disjointed frontend, don't solve the standard problem of a request sending an invalid form entry and showing the validation errors on the form. I ended up building my own implementation of that, which of course also requires a convention on message format. Since most apps need to solve this, it's so weird to me that frameworks nowadays don't solve it out of the box.
I converted from webpacker (or rather shakapacker, the continuation after rails moved away from webpacker) to vite_rails recently, and it's been such a breath of fresh air. It's easy to set up, and easier to maintain. Strongly recommended.
I definitely suggest using vite and the vite ruby gem. Create your Rails app, Create your TS + React app with vite, add the vite gem and done. It does not get better than that. Super fantastic.
Ruby has the nicest object-oriented design (everything is an object) outside of smalltalk (IMHO).
This is in contrast to the mess that is Python: in Ruby it is natural that each or map are methods of Array or Hash rather than global functions that receive an Array or Hash argument (see the snippet after this comment).
This goes as far as having the not operator '!' as a method on booleans:
    false.! == true
Once you have understood it, it is a very beautiful language.
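To the point about `map` being a method rather than a global function, pipelines read left to right (the Python line in the comment is just for contrast):

```
(1..10).map { |n| n * n }   # squares
       .select(&:even?)     # keep the even ones
       .sum                 # => 220
# The Python equivalent reads inside-out:
#   sum(filter(lambda n: n % 2 == 0, map(lambda n: n * n, range(1, 11))))
```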
Yes, but it is not fully OO. Something like `if.class` produces an error, as opposed to returning some type such as "Syncategoreme".
That might look really anecdotal, but in practice it's probably the biggest obstacle to providing a fully localized version of Ruby, for example.
The second biggest challenge would probably be the convention of using an uppercase initial to mark a constant, which requires a bicameral writing system. That is rather ironic given that none of the three writing systems of Japanese is bicameral (it seems fair to exclude romaji here). Though this can be somewhat circumvented with tricks like:
```
# Define a global method dynamically
Object.send(:define_method, :lowercase_constant) do
  "This is a constant-like value"
end
```
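Once defined, it can be called anywhere, behaving like a constant without the uppercase initial:

```
lowercase_constant # => "This is a constant-like value"
```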
It's very powerful, though, which is a bit terrifying. You can literally monkey patch Object at runtime and add methods to every single instantiated object! (I believe this is how RSpec works...)
Awesome, but with great power comes great responsibility ;)
RSpec moved away from that quite some time ago. Monkey patching is usually frowned upon nowadays; even refinements, which can simulate monkey patching in a limited scope, are rarely used.
Oh, I'm extremely out of date; I was into Ruby back when Why's guide was a thing. Maybe I'll revisit it someday if I ever get bored of Go paying the rent.
The language is incredibly flexible and allows for "DSLs" that are just ruby libraries.
A simple example: `3.days.ago` is a very commonly used idiom in Rails projects. Under the hood, it extends the base integer class with `def days` to produce a duration, and then extends duration with `def ago` to apply the duration to the current time.
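Roughly like this (a sketch of the idea; ActiveSupport's real implementation returns a Duration object rather than bare seconds):

```
class Integer
  def days = self * 24 * 60 * 60 # a "duration", here just seconds
end

class Numeric
  def ago = Time.now - self      # apply the duration to the current time
end

p 3.days.ago # => a Time three days before now
```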
Another example: `yyyy-mm-dd(datestr)` will parse a date string that matches the yyyy-mm-dd format. It looks like a special DSL, but it's just Ruby: `dd(datestr)` produces a `DatePart`, and then it's just operator overloading on subtraction to capture the rest of the format and return the parsed date.
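The trick can be rebuilt in a few lines (an illustrative re-implementation of the idea, not the library's actual code):

```
require "date"

# `yyyy-mm-dd(s)` parses as `(yyyy() - mm()) - dd(s)`: ordinary method
# calls chained by an overloaded binary minus.
FormatToken = Struct.new(:pattern) do
  def -(other)
    case other
    when FormatToken then FormatToken.new("#{pattern}-#{other.pattern}")
    when DatePart    then Date.strptime(other.str, "#{pattern}-#{other.pattern}")
    end
  end
end

DatePart = Struct.new(:pattern, :str)

def yyyy    = FormatToken.new("%Y")
def mm      = FormatToken.new("%m")
def dd(str) = DatePart.new("%d", str)

date = yyyy-mm-dd("2024-12-25")
p date # => #<Date: 2024-12-25 ...>
```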
That library feels unnecessary, but the entire thing is 100 lines of code. The ease of bending the language to fit a use case led to a very rich ecosystem. The challenge is consistency and predictability, especially with a large team.
Not really a niche language. Fantastic for web server development. A more flexible and powerful language than Python (the metaprogramming can be ridiculously powerful, when done well), without the nonsense of whitespace sensitivity. ActiveRecord is perhaps the best ORM out there, and Rails has tons of functionality to get you running.
Overall, a pleasant and expressive language with an incredible community. Python ends up "winning" because of PyTorch + pandas, but is (imo) a worse language to work in and with.
...but Ruby is whitespace-sensitive too. It's hard to notice, because the rules mostly follow intuition, but there are cases where not just a missing newline but the absence or addition of a single space changes the resulting syntax. Off the top of my head there's the difference in parsing unary vs binary operators, like + and *, and the ternary operator ? : vs : in symbols, but there are certainly other cases.
Sure, like `a ?b :c` is nothing like `a ? b : c` (I guess the former is actually invalid), but that's obviously not what the previous message was referring to when speaking of Python, which uses whitespace as the main facility for determining block scope.
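A concrete case where a single space flips the parse (using a hypothetical method with an optional argument):

```
def foo(x = 10) = x * 2
b = 5

p foo + b  # => 25  parsed as foo() + b
p foo +b   # => 10  parsed as foo(+b): unary plus, b becomes the argument
p foo+b    # => 25  parsed as foo() + b
```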
People say frontend/backend parity, and that’s true, but I also remember there was a time in 2011 or so where single thread/async was this new hot thing.
Nginx was starting to get popular and overtake Apache on installs, and people were enamored with its performance and idea of “no blocking, ever” and “callbacks for everything”, which the nginx codebase sorta takes to the extreme. The c10k problem and all that.
When JavaScript got a good engine in v8, Node was lauded as this way to do what nginx was doing, but automatically and by default: you simply couldn’t write blocking code so waiting on I/O will never bottleneck your incoming connections. Maximum concurrency because your web server could go right back to serving the next request concurrently while any I/O was happening. But no “real” multithreading so you didn’t have to worry about mutexes or anything. I remember being slightly jealous of that as a Rails developer, because webrick/unicorn/etc had a worker pool and every worker could only handle one request at a time, and fixing that could only happen if everything was async, which it basically wasn’t.
JavaScript becoming a popular language in its own right due to frontend was certainly the most important factor, but it wasn’t the only one.
Not sure why this is considered a "classic" piece. It reads as if the author has just discovered the difference between preemptive vs cooperative scheduling, but hasn't yet found the words to describe his "discovery". Yes, you can write a `while(true){}` loop and block the event loop. That's not some damning indictment of Node. The point of Node is that you don't have to block on IO, so your program doesn't have to halt the entire world, and sit around doing nothing while you're waiting for a hard drive to spin or a network request to complete.
Heh, he's so right in every regard although I use Node.
Worst of all, they made npm packages dead easy, so most of them don't even have a readme file, not to mention inline docs like POD or RDoc. This is how you end up with spam packages, malware in npm, and left-pad disasters.
Given the popularity of Github, and the fact that a readme file is the first thing you see when pulling up a project on Github, most projects these days do in fact have readme files.
To add: front-end developers and other people who learned in JavaScript (because a web browser is something everyone has; it turns out to be a pretty great runtime environment, has live editing with dev tools, etc. It's honestly a fantastic way to get into programming) could write the icky backend code to make their slick websites, SPAs and games have internet-based save state.
Because it’s not as good. Why would I want two languages and two runtimes when I can just have one, all while delivering a demonstrably better user experience?
God forbid we reuse knowledge instead of drudging through the never-ending learning of the same concepts with different syntaxes, and the 10x costs of supporting every special native snowflake toolchain.
To be fair, js interpreters are available out of the box in all digital devices out there that embed a browser. That's a huge deal, as far as portability is concerned.
Rails wants to be the UI framework, and a lot of devs didn't want to do server side UI and state, especially OOP style. So it was easier to do JS for your APIs, etc. DHH's opinions kind of made it an all or nothing choice for many folks.
I don't know but I still fully take the criticism: Google Trends is not an indicator of absolute usage, but (if anything) relative usage. It's not clear which of the two parent referred to.
Those are relative positions. We can't talk about a "nosedive" from that. It may be the case, but also maybe Ruby was just the slowest growing out of a number of languages growing in popularity. We don't have enough data from there.
https://news.ycombinator.com/item?id=36310130 - Rewriting the Ruby parser (2023-06-13, 176 comments)
[1] https://github.com/ruby/ruby/blob/master/parse.y
[2] https://github.com/ruby/prism/blob/main/src/prism.c