My initial experiment was encouraging. Boot time in development mode went from ~23sec to ~16sec, and I only enabled it for the main engine that comprises about 85% of our codebase, so the real gains might be larger.
Looking forward to seeing what it can do in production mode - our boot times there are horrendous and it's a big deal for things like cron jobs. Thank you to all those who worked on this.
It would be great if we somehow had a way to load a module-as-file without unknown side-effects, and without depending so deeply on the other contents of the global namespace.
But this is basically describing a complete overhaul of most of what makes ruby ruby, so... ¯\_(ツ)_/¯
B = "c".freeze
raise "because I can, that's why"
# or even method_added, TracePoint, ...
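Those fragments can be stitched into a runnable illustration of the problem. A minimal sketch (the file name and the `$sneaky_hook_ran` global are made up): requiring a file can mutate global state in ways the requirer cannot see from the require statement alone.

```ruby
require "tmpdir"

# Write a hypothetical library file whose load has side effects.
dir = Dir.mktmpdir
File.write(File.join(dir, "sneaky.rb"), <<~RUBY)
  B = "c".freeze          # pollutes the global namespace with a constant
  $sneaky_hook_ran = true # mutates interpreter-wide state
RUBY

require File.join(dir, "sneaky")

puts B                # the requirer now sees the constant
puts $sneaky_hook_ran # and the global
```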
You could formalize it by calling it a separate language -- a `.rb-module` file type or something.
I presume you mean "expressions that aren't method definitions in class bodies" (though that's a problem, because of things like attribute declarations) rather than "expressions in method bodies", since methods with no expressions in their bodies would be pointless.
But if you have source code that you know you will likely require, such as the standard library, you can do the initial parsing while compiling the Ruby VM, so you don't end up doing the parse again at runtime.
Long term what we hope to do is to provide a build of the Ruby VM that includes the version of Rails you are using pre-parsed.
And then longer term we'd like to actually fully parse and initialise Rails (run the top level of the files which are loaded) during compilation, and freeze the heap and store it in the Ruby VM executable. When you run this special Ruby/Rails VM the Rails code is simply mmapped into your address space with all objects initialised and ready to go.
Obviously it'll require some tweaking to defer work like starting the web server, so that it doesn't run during compile time.
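The runtime half of that idea is already exposed by MRI (since 2.3) through `RubyVM::InstructionSequence`. A minimal sketch of the parse-once / load-binary-later cycle, with made-up file names and contents:

```ruby
require "tmpdir"

dir = Dir.mktmpdir
src = File.join(dir, "greeting.rb")
File.write(src, 'CACHED_GREETING = "hi from the cache"')

# First boot: parse + compile once, then persist the bytecode.
iseq = RubyVM::InstructionSequence.compile_file(src)
File.binwrite(File.join(dir, "greeting.bin"), iseq.to_binary)

# Later boot: skip the parser entirely and run the stored bytecode.
binary = File.binread(File.join(dir, "greeting.bin"))
RubyVM::InstructionSequence.load_from_binary(binary).eval

puts CACHED_GREETING
```

This is the mechanism bootsnap's iseq cache builds on; the heap-freezing idea described above goes further by persisting the already-initialized objects rather than just the bytecode.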
It's a really interesting idea to pre-load the heap with just a set of libraries, which wouldn't be subject to as much change.
This definitely looks interesting. Boot times for majestic monoliths is a pain that I've experienced many times.
How does this fit in with zeus and/or spring?
How similar is this to bootscale? (https://github.com/byroot/bootscale) Or rails-dev-boost? (https://github.com/thedarkone/rails-dev-boost)
Two orthogonal optimizations: It definitely plays nice with spring, speeding up the pre-fork rather a lot, and the post-fork a little bit. I can't think of any reason it wouldn't also work with zeus, but I haven't tried it.
> How similar is this to bootscale?
The load-path-caching features are a minor evolution of bootscale. The major difference is that the caching is a little more aggressive, in order to be confident enough to return definitive negative results when it thinks a feature is absent from the load path.
I remember using rails-dev-boost years ago, but I can't really remember what it does (EDIT: Should be a similar story to Spring -- complementary optimizations)
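For intuition, here's a toy sketch of what load-path caching does: scan `$LOAD_PATH` once into a hash so lookups, including definitive negative answers, cost no filesystem calls. This is far simpler than bootsnap's real cache, and the class name is made up:

```ruby
require "tmpdir"

# Toy load-path cache: index every *.rb feature on the path once,
# so resolution is a single hash lookup afterwards.
class TinyLoadPathCache
  def initialize(load_path)
    @index = {}
    load_path.each do |dir|
      Dir.glob("*.rb", base: dir).each do |file|
        feature = file.delete_suffix(".rb")
        # first directory on the path wins, mirroring require's behavior
        @index[feature] ||= File.join(dir, file)
      end
    end
  end

  # Full path if present; nil is a definitive negative result,
  # returned without touching the filesystem.
  def resolve(feature)
    @index[feature]
  end
end

dir = Dir.mktmpdir
File.write(File.join(dir, "foo.rb"), "")
CACHE = TinyLoadPathCache.new([dir])
puts CACHE.resolve("foo")             # full path to foo.rb
puts CACHE.resolve("missing").inspect # nil
```

The hard part bootsnap solves, and this toy ignores, is invalidating the index cheaply when the filesystem changes.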
How is that possible? Or do you mean Ruby before Rails?
I don’t remember if Tobi was involved in the Ruby community in late 2004 when DHH introduced Rails at RubyConf 2004 (in DC), and I don’t remember him being at that conference, either.
But I do remember seeing Tobi involved in ruby-talk by early 2005.
But Shopify has always been a Rails shop.
remote: ! Could not detect rake tasks
remote: ! ensure you can run `$ bundle exec rake -P` against your app
remote: ! and using the production group of your Gemfile.
remote: ! rake aborted!
remote: ! Errno::ENOSPC: No space left on device
Startup time was one reason we started migrating away from Rails in a previous workplace, between frustrating startup time in development and test and occasional quirkiness of zeus and spring. Bootsnap would have been a godsend.
Zeus is capable of detecting any sort of invalidating file change, but is pretty buggy (or at least was historically -- the Stripe guys improved it a lot after I stopped working on it).
Does anyone know of a good solution for prebuilt, relocatable Rubies on macOS that I could easily bundle with my tool? I'm reluctant to use Homebrew or another package manager like rbenv, where I'd have to implement a non-trivial bootstrap process. Phusion's travelling-ruby project would be perfect, but it's unmaintained.
I just want my CLI to boot in 0.05s without needing to change languages. Love Ruby, but getting decent perf takes a bit of effort.
Out of curiosity, what's your tool(s)?
We get around the 'sudo gem' problem by distributing our tool as a git repo, then bootstrapping a vendored install of Bundler to manage our own little gem path using /usr/bin/ruby. We take care to remove most ruby-related env vars during init so we're safe from whatever crazy RBENV or RVM shenanigans are happening on the system. This setup works fine, but we don't get recent language perf improvements since we use system Ruby.
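The env-scrubbing step described above can be sketched like this; the variable list is illustrative, and the real tool may scrub a different set:

```ruby
# Remove version-manager and Bundler state so the vendored setup always
# resolves gems consistently against /usr/bin/ruby.
def sanitize_ruby_env!(env = ENV)
  %w[RUBYOPT RUBYLIB GEM_HOME GEM_PATH BUNDLE_GEMFILE RBENV_VERSION].each do |var|
    env.delete(var)
  end
end

ENV["GEM_HOME"] = "/home/user/.rvm/gems" # simulate version-manager pollution
sanitize_ruby_env!
puts ENV.key?("GEM_HOME") # false
```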
Like you, we distribute it as a git repo. We don't use bootsnap with it, but we have a few strategies that give us reasonable times:
$ time ./bin/dev help up >/dev/null
0.07s user 0.03s system 98% cpu 0.102 total
* Autoload everything. Our toplevel lib/dev.rb file is a whole-namespace autoload registry. Only a few other constants are defined there. Everything is loaded just by cascading through autoloads.
* Defer stdlib requires: We load most stdlib features within the method body from which they're used. Several stdlib features take a surprisingly long time to load.
- (internal http client)
- (internal package manager)
Doing this saved about 40% of our CLI boot time:
$ time /usr/local/bin/airlab > /dev/null
/usr/local/bin/airlab > /dev/null 0.27s user 0.15s system 79% cpu 0.521 total
$ git co jake--no-rubygems
$ time /usr/local/bin/airlab > /dev/null
/usr/local/bin/airlab > /dev/null 0.24s user 0.08s system 98% cpu 0.326 total
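The autoload-registry pattern from the list above can be sketched as follows (module, file, and method names are made up, not the commenter's actual tool):

```ruby
require "tmpdir"

# Write a fake library file for the registry to point at.
dir = Dir.mktmpdir
File.write(File.join(dir, "dev_greeter.rb"), <<~RUBY)
  module Dev
    class Greeter
      def self.hello
        "hello"
      end
    end
  end
RUBY
$LOAD_PATH.unshift(dir)

# The registry itself: constants are declared up front, but no file is
# read until a constant is first referenced, keeping boot nearly free.
module Dev
  autoload :Greeter, "dev_greeter"
end

puts Dev::Greeter.hello # first reference triggers the require
```

Deferred stdlib requires work the same way, just manually: move `require "net/http"` and friends into the method bodies that actually use them, so the cost is paid on first use instead of at boot.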
A Python project I work on has 279,124 lines of code and boots up in 2.5 seconds.
Without downloading it, all I can find is Discourse had 60,000 lines of code 3 years ago. Assuming as an extreme estimate they tripled their code size in 3 years, we have 180,000 LOC taking 6 seconds to boot up according to the article.
Is this normal for Ruby? Is the author using a spinning disk drive rather than an SSD?
you can watch it here: https://www.youtube.com/watch?v=_bDRR_zfmSk
I'm not sure about those stats you posted from 3 years ago since they aren't using the same `rake stats` numbers that are built in to Rails. Discourse's Rails app is currently 63k SLOC not including tests.
On my relatively fast computer booting takes 4s without bootsnap and 2.5s with it, which is a nice quality of life improvement.
I think it's safe to assume the author is using a reasonable SSD on a MacBook Pro, given that the iseq cache targets only macOS.
I'm not 100 percent sure whether third-party modules for background tasks get loaded at the same time, but they aren't part of my line-of-code count.
* MacOS Sierra
* 2.6 GHz Intel Core i7
* 16GB 2133 MHz LPDDR3
* 500GB SSD, whatever Apple ships.
Starting benchmark time: 13.05 seconds.
With load_path_cache: 10.01 seconds
Sadly, with compile_cache on I'm getting an error. /vendor/bundle/ruby/2.3.0/gems/bootsnap-0.2.14/lib/bootsnap/compile_cache/iseq.rb:30:in `fetch': No space left on device (Errno::ENOSPC)
Any ideas on what causes this?
Practically speaking, the compilation caching features are not supported on Linux. Eventually we'll change the cache backend or add a different one that does work on Linux.
From a language standpoint: Ruby emphasizes developer happiness at the expense of some things, like performance/concurrency for example.
From a career standpoint: There is a lot of ruby in the world today. There will be lots of applications to maintain as the years go on, which is +1 from a career perspective. Lots of people will also continue to write new ruby software, because it's effective and easy to be productive in.
All the languages you mentioned + ruby are all good languages to learn for various reasons. All have their weaknesses and strengths. None of them are an effective hammer for every nail you'll encounter.
If you live on the west coast, certainly. Anywhere else in the world, absolutely not.
> Ruby emphasizes developer happiness at the expense of some things,
That's a strange statement. Plenty of developers enjoy writing PHP, or C++ or even Java. Ruby doesn't make developers happier by any serious metric.
Ruby had its shot but wasted it because of the petulance, the arrogance, the immaturity and the toxicity of its community.
> None of them are an effective hammer for every nail you'll encounter.
Ruby (in fact Rails, since that's really what it is all about) is clearly redundant in the era of lightweight servers and thick clients.
Ruby is the language behind the second most popular HTTP MVC framework (Rails) and the most popular configuration management tool (Chef).
There's a decent demand for Ruby developers worldwide. I worked in New Zealand before getting paid relocation to Germany as a Ruby dev and I'm still getting daily "come to London" LinkedIn spam for Ruby work.
"Developer happiness" is how Matz describes minimising friction between the developer and the language. Ruby aims to provide abstractions with the lowest possible cognitive overhead. Contrast this with Rust which aims to provide abstractions with the lowest possible performance overhead. Everything is a trade-off.
Ruby's stagnation has nothing to do with the community. Improving Ruby performance is extremely difficult because of the sheer flexibility of the language. It doesn't help that until fairly recently no one got paid to work on Ruby. Now Heroku are paying several of the core team.
The IBM OMR project is working on bringing a companion JIT to standard Ruby. Oracle are working on a ground-up reimplementation of Ruby using Truffle + Graal, a new technology for implementing languages on top of the JVM, with performance that should be on par with Go.
Bootscale should work, and the load-path-caching feature of bootsnap should work too, if you can get the gem to install.
It is definitely a net positive for us. YMMV, of course.
I wonder if in 2019 I'll be seeing "In 2019 there is really no reason to define a micro-services architecture".
The pendulum it keeps on swinging.