Now everyone can know the joys and horrors of Bundler! ;) I kid. This announcement is very exciting for people tinkering with Rust like myself.
I really like Go, but the lack of a solid package management solution that I like using is a downer. Before you rage-comment: I know how Go packages work. I know you think they're better than anything that's ever been invented. I know there are solutions out there for some portions of what I want. But nothing has been created that really works for me (yet). So, to see this development in another one of my "tinker with it but not build a ton of production quality code just yet" languages is exciting!
> I really like Go, but the lack of a solid package management solution that I like using is a downer. Before you rage-comment: I know how Go packages work. I know you think they're better than anything that's ever been invented. I know there are solutions out there for some portions of what I want. But nothing has been created that really works for me (yet).
What is it that you do want in a Go "package manager", then?
You say that you know how Go packages work, so I'm assuming it's not one of the typical misunderstandings of newcomers from the Python/Ruby/Node.js world (all 1.x code is forwards-compatible with no modifications, static linking makes specifying versions of packages less relevant, the package/filesystem layout parallel means that vendoring project-specific forks of packages is relatively straightforward once you know how, etc.).
I put "package manager" in scare quotation marks because one of the design goals of Go is essentially to render most of the functionality of such package managers irrelevant.
I say this as someone who has been writing Go for work on a daily basis for over a year and a half: while Go's package system isn't perfect, it's pretty damn good, and I haven't felt the need for a package manager at all ever since I learned the way things worked.
> I put "package manager" in scare quotation marks because one of the design goals of Go is essentially to render most of the functionality of such package managers irrelevant.
Golang does have a de facto package manager in go get (with its own conventions on file system layout etc.), it just doesn't handle versioning. The Go packaging system is pretty good, but it isn't perfect, and I'm not convinced vendoring is a viable solution if you're going to be sharing code and collaborating on several open source projects. It's a recipe for lots of incompatible versions, with security and other bugs left unfixed floating around in different projects, and a lot of confusion for newcomers about the required version of a library, because it is not specified anywhere.
The current approach works perfectly if you are working in-house, willing to do your own dependency resolution, and happy to vendor your own dependencies - it's completely reasonable within one company or org to do this. However, it fails in more collaborative or open-source environments, like libraries shared on github depending on other libraries shared on github. I do think they need version pinning for that (in fact they have version pinning for the language in go get, just not for individual packages), and it wouldn't be a large change.
For example, in the recent Stripe CTF, people ended up downloading and building different packages locally than those available on the server builds, because there was no clarity on which version was included. That was kind of annoying. It could be solved by vendoring, but then you have other issues in the long-term with relying on other people to keep their pkg dependencies up to date or you can be including two conflicting versions of a third pkg.
They've taken an interesting approach in trying to eschew versions (which avoids some problems with dependency resolution), but I'm not sure it is sustainable long-term. Didn't they also try to make golang itself version free initially?
Well, Rust also uses static linking by default and crates are also laid out based on the filesystem [1]. But we still feel the need for a package manager...
[1]: You can nest subdirectories within the package directory if you wish, to provide more fine-grained namespacing, but all source files belonging to a Rust crate must be descendants of one directory (in contrast to Go, in which the source files must be children of one directory).
I remember reading http://golang.org/cmd/go/ (specifically, the part about the <meta> tag) a while back and thinking "well, that'd be pretty straightforward to get working" and then (sadly) never having the time for it :(.
Thanks for the link. Just tried it out; nice, and I love the use of semver. So you can do this to import a specific version of, say, yaml:
import "gopkg.in/v1/yaml"
or this for an arbitrary package on github:
https://gopkg.in/username/v1.0.0/pkg
Weird that versions are before the pkg name though, that doesn't seem right...
Here's hoping this sort of versioning scheme makes it into the go toolchain eventually. The only downside of this solution is it redirects only to github and guesses urls, I quite like that the go toolchain is indifferent as to where the sources are hosted. This would be very easy to add to the toolchain though; all they'd need to do is recognise imports like this (personally I prefer versions at the end of current urls, not in the middle):
import "bitbucket.org/user/pkg/v1.1.2"
in go get and download that tag to that path; no other changes in the toolchain would be required, and those who ignore versions could continue to do so, while people who wanted them could use version-specific imports. I think the go team's objection was that this doesn't solve any of the complex dependency issues that plague package managers, but it'd be nice just to be able to specify which version is expected for future users, and to have predictable builds when sharing code outside one org.
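The recognition step the toolchain would need is tiny. Here's a rough sketch of splitting a version tag off an import path; note that the function name and regex are my own invention, and nothing like this exists in the real go toolchain:

```go
package main

import (
	"fmt"
	"regexp"
)

// versionSuffix matches a trailing semver-style path element such as
// "/v1.1.2". Hypothetical: this is just to illustrate how cheap the
// recognition step would be, not a proposal for the exact syntax.
var versionSuffix = regexp.MustCompile(`^(.+)/(v\d+(?:\.\d+){0,2})$`)

// splitVersionedImport returns the repository path and the version tag,
// or the path unchanged with an empty tag if no version is present.
func splitVersionedImport(path string) (repo, tag string) {
	if m := versionSuffix.FindStringSubmatch(path); m != nil {
		return m[1], m[2]
	}
	return path, ""
}

func main() {
	repo, tag := splitVersionedImport("bitbucket.org/user/pkg/v1.1.2")
	fmt.Println(repo, tag) // bitbucket.org/user/pkg v1.1.2
	repo, tag = splitVersionedImport("bitbucket.org/user/pkg")
	fmt.Println(repo, tag == "") // unversioned imports pass through
}
```

go get could then check out that tag before building, and unversioned imports would behave exactly as they do today.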
I actually haven't even used gopkg.in myself. Gustavo Niemeyer just released it and has been pushing it as one way to promote API stability. I agree with him and plan to migrate to the scheme soon, as there are a few people using my libraries.
With that said, I doubt very much it will make it into the official toolchain. There's really no point if it can be offered just as well by a third party. For example, godoc.org has proved invaluable but isn't official.
Also, keep in mind that gopkg.in is still pretty new, and I believe Gustavo has said that adding support for other web sites or revision control systems is perfectly doable. Just wanted to get something started for the lowest common denominator, I suspect. :-)
The advantage of having it in go get would be that no one has to rely on a single point of failure like gopkg.in, nor would they have to get new urls recognised there, and finally that it could become an accepted way to version dependencies instead of the versionless future golang lives in right now. They did add versioning to the lang after a while so hopefully this will be similar. It's not a huge deal anyway right now, it'll become more important as the pkg universe grows.
> static linking makes specifying versions of packages less relevant
I'd really like to know why you think this is the case. I don't specify versions of dependencies because of dynamic linking, I specify versions of dependencies (wherever possible) so I know I can still build the thing later.
An old binary of a go program is no use to me as a developer.
I think he's referring to the fact that you won't try to run some executable that depends on Ruby 1.9.3 and SomeGem 2.9 while your rbenv environment is Ruby 2.0 and SomeGem 3.0.1.
I'm not a Ruby developer, but I know this scenario has caused me considerable trouble just trying to use things built with Ruby.
This doesn't work when you have more complex dependencies, because they in turn have dependencies, which may clash. You have no way of knowing what version of package c packages a and b expect when you include a and b.
If the dependencies clash, it doesn't matter what system you use. There isn't a package manager good enough to fix the issue of having conflicting dependencies.
Vendoring or cloning (with explicit upstream merging) is again your best bet for dealing with this, as it is the only way to have complete control of the dependencies involved.
If you are saying something like having a package X in your vendor lib that depends on packages Y and Z which you don't have vendored... then you haven't done it right. You vendor all dependencies.
I agree vendoring is essential in some situations, mostly in deployment to production or keeping tight control of libraries used in a project, but I disagree that versioning is not useful in a packaging system.
> If you are saying something like having a package X in your vendor lib that depends on packages Y and Z which you don't have vendored... then you haven't done it right. You vendor all dependencies.
No, I'm saying imagine you have an app, which depends on pkgs a and b. a depends on c1.1, b depends on c1.5, but this is documented nowhere in the code because the import statements simply say import "c". This is the default and encouraged behaviour currently with golang and go get. go get a fetches the latest of c.
You then vendor a,b,c and happily compile your app, which doesn't work because the version of c you got when you vendored was 2.1, which neither a nor b was written against. Maybe your app will fail to build and you fix it, maybe your app will build but be wrong in subtle ways (say an enum value changed), without explicit versioning you have no idea really.
If a and b each had a vendored c, you have an even messier situation: security bugs exist in c1.1 and c1.5, but you don't even know which versions you have if you pull in a's c and b's c and compile against them, or whether you can safely use two versions of c at once, and your own code uses c2.1 too, etc.
Being explicit about versions is helpful for resolving dependency conflicts because it makes explicit which version was expected at the time of writing. It doesn't magically solve all conflicts; no one has this completely worked out. But it helps to be as explicit as possible when including other projects, and it helps you keep libraries up to date while easily including them, even if you vendor.
If you have a large, open ecosystem of constantly evolving libraries, which I think golang should aspire to, it is useful to version them (just as golang itself is versioned).
Dependencies clashing is not always a binary thing. If they're well-specified, there may be a way to meet everything's version specifications.
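To make that concrete, "meeting everything's version specifications" amounts to intersecting constraint ranges. A toy sketch (integer versions instead of full semver, and all names here are illustrative, not any real resolver's API):

```go
package main

import "fmt"

// versionRange is a half-open interval [min, max) over simple integer
// "major" versions; real resolvers work over full semver, but the
// intersection idea is the same.
type versionRange struct{ min, max int }

// intersect returns the overlap of two ranges and whether any single
// version satisfies both constraints at once.
func intersect(a, b versionRange) (versionRange, bool) {
	lo, hi := a.min, a.max
	if b.min > lo {
		lo = b.min
	}
	if b.max < hi {
		hi = b.max
	}
	return versionRange{lo, hi}, lo < hi
}

func main() {
	// a wants c >= 1, < 3; b wants c >= 2, < 4: both accept version 2.
	r, ok := intersect(versionRange{1, 3}, versionRange{2, 4})
	fmt.Println(r, ok) // {2 3} true

	// a wants c < 2, b wants c >= 3: a genuine, unresolvable clash.
	_, ok = intersect(versionRange{1, 2}, versionRange{3, 4})
	fmt.Println(ok) // false
}
```

Only the second case is a true clash; with well-specified ranges the first resolves automatically, which is the whole point of declaring versions.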
Anyways, this isn't just academic. Tools to enable you to specify dependencies without vendoring them have been developed and used in several different environments, and at no point in using any of them have I gone "Gosh I wish I could go back to cluttering up my version control with other people's code!"
> If the dependencies clash, it doesn't matter what system you use. There isn't a package manager good enough to fix the issue of having conflicting dependencies.
With NPM, it's fine as long as the two modules aren't calling into each other with improper data as a result of the dependency clash (because each dependency is responsible for installing its own subdependencies, rather than relying on a global namespace/install location). So, you'd have a problem if A relies on C1, and B relies on C2, and you got data in format c1 in A and called into B which expected format c2. Otherwise, you're fine.
There are some people trying to remedy this issue. While the Golang designers don't feel the need to include it in the language, others may resolve this. I'd recommend reading the goals doc, as they seem to have a good starting point.
Handling packages in Go is always something that stops me from going any further when I look at it. Every time I realize I won't be able to properly specify dependencies with versions or pin them (like what Bundler does), it drives me away screaming. It's only in the last few years that dependency hell has been resolved in other environments (like Ruby and Node, and to a lesser and sometimes weirder extent Python); I don't really want to go back in time any more.
Coming from Ruby, when I last looked at Go about 5 months ago, I was dismayed at the package management state. My specific concern was around version pinning. I'll be excited to try out Cargo when it launches.
Yehuda's taking on another major software project before Rails.app [0] / Tokaido [1] is released? It's been 22 months since it raised twice its initial goal.
@molecule Fair question. Tokaido ended up taking far longer than I expected (a big, huge mea culpa for sure). TL;DR: At long last, I plan to ship Tokaido, along with a website, documentation and automation at RailsConf this year. That wouldn't be the first time I said such a thing, but the feature set is actually done (and working on many test machines).
I dedicated a bunch of medium-time months at the beginning of the project to getting the initial set of functionality done:
* Figuring out how to statically build Ruby, which has required non-trivial updates with every version of Ruby, and a bunch of work to get some of those fixes upstream. Many of the projects that bundle Sass or Compass in a pretty GUI are making use of this initial work.
* Building a number of OSS libraries (https://github.com/tokaido) that enabled an application-isolated workflow like Pow with many fewer failure modes.
* Integration with popular tools that people use in development (redis-server and Postgres.app)
This process took longer than I expected (a year, rather than more like six months), and at the end of it, I had run out of work time to devote to the project. I turned my attention to nurturing a small community of people who could make Tokaido an open-source project that could be community maintained once we ship.
Andrés Robalino, in particular, helped me squash a number of important bugs that I was having trouble with (including a bug that was triggering occasional 100% CPU usage on some machines), and has cleaned up the process of building static Ruby and added a bunch of polish to the UI.
Tokaido has definitely not been my finest, quickest open-source turnaround, and it's fair for people who don't like me to use it as an example of my failures. That said, I think people who follow me know that I have committed my heart and soul to many more projects than Tokaido, many of which have large communities of people who love them.
Not everybody needs to love my projects or my style. I should have shipped Tokaido earlier, and for that I am sorry. That said, I am committed to finishing the job with Tokaido, and believe that the other major projects I have worked on speak for themselves.
One issue I've run into (which may simply be ignorance) is the time consuming 'bundle install'. For example, I've updated some gems and run 'bundle update' on my development machine. I push that to the CI server, which must run 'bundle install' to install any new gems. But this seems to trigger the (NP-complete, last I checked [1]) dependency resolver yet again, even though I've already run it locally and my Gemfile.lock is newer than my Gemfile. Is there a way to skip it? Perhaps bundle just-blindly-install-the-gems-listed-in-Gemfile.lock?
Bundler in general is a nice tool, and Bundler + shared pile of gems is a nicer workflow than gemsets (though I am not a fan of "remembered options"; I'd rather make my own config file than forget that Bundler "remembered" my experimenting).
The more I think about Bundler, the more I wish that general purpose tools like Nix Package Manager [2] got more love. I've been playing around with using Bundler for the dependency resolution step, then Nix for the packaging and deployment steps (each app might have a separate Nix profile where it could install its particular set of gem versions), but there are missing pieces.
Yup. This has gotten way faster; recent versions of Bundler include a -j flag. Also, the bundler-api project has been steadily working on making this faster. Without getting into the details, let's just say the gem format and API do not make doing this efficiently easy.
> But this seems to trigger the (NP-complete, last I checked [1]) dependency resolver yet again,
It does not. It _does_ check to ensure that what's in the Gemfile matches what's in the Gemfile.lock, though.
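The cheap check being described can be pictured like this (purely illustrative of the idea, in Go for consistency with the rest of the thread; this is not Bundler's real data model):

```go
package main

import "fmt"

// lockMatchesGemfile sketches a lockfile consistency check: every gem
// named in the Gemfile must already appear in the lockfile. No full
// dependency resolution is involved, which is why it's fast.
func lockMatchesGemfile(gemfile []string, lock map[string]string) bool {
	for _, gem := range gemfile {
		if _, ok := lock[gem]; !ok {
			return false // Gemfile changed since the lock was written
		}
	}
	return true
}

func main() {
	lock := map[string]string{"rails": "4.0.2", "pg": "0.17.1"}
	fmt.Println(lockMatchesGemfile([]string{"rails", "pg"}, lock))          // consistent
	fmt.Println(lockMatchesGemfile([]string{"rails", "pg", "redis"}, lock)) // needs re-resolution
}
```

Only when the check fails does the expensive resolver need to run again.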
It's been a while (almost 2 years) and I don't use ruby much any more, so my memory is a little hazy and for all I know the issues have been fixed. My main complaints were that too much magic made it difficult to tell what was being loaded from where; and that it was a ton of work to get proper automated deployments set up (with puppet...in a way that didn't make assumptions about the target machine - RVM was also a huge culprit here). Also, at times it was painfully slow. To be fair I'm not super familiar with the internals - it's totally possible that it's all ruby's fault and any ruby package manager would have the same issues. For every day development bundler was mostly great - it was just painful to get proper deployments working.
Things have changed quite a bit in the last two years. Especially around speed. So things are a lot better now in Bundler land, and should be better with Cargo, too. Cool.
> I much prefer how npm handles things.
Can you elaborate on how npm handles things in a better way?
Fundamentally, I think it boils down to the fact that node's module loader is dumb, and npm doesn't try to be a smart module loader on top of that - it just installs dependencies to where node's dumb loader will be able to find them. And, because of CommonJS, it's easy to tell it exactly where it's allowed to look.
One way this manifests itself in a nice way is with local installs - let's say you have a package that depends on modules A, B, and C. Each of those modules has its own dependencies. The default behavior when you install the package is that A, B, and C have copies of all their dependencies (so if all 3 depend on D, you get 3 copies of D), and so forth all the way down. This does waste disk space, but for the most part node modules are small and disk space is cheap. The advantage is that you don't have to worry about conflicting dependencies, or subtle changes breaking things in hard-to-find ways.
Like a sibling commenter said, I have no idea if this model will transfer well to Rust.
For one, I love local by default. Having a package manager that deals with a global context makes things considerably more complex. Having everything local allows you to isolate one crate's (if we're speaking in Rust terms) dependencies to another crates' dependencies.
NPM also has the ability to have isolated dependencies from each other dependency. For example, module A can pin module B to version 2, whereas module C can pin module B to version 3. So you have independent copies.
Now, that worked for a dynamic language where you don't have to deal with static/dynamic linking. Would it be appropriate to statically link two modules that are the same, but at different versions? Maybe not. That would lead to massive binary sizes.
Roger. It's hard to articulate my own thoughts about this, but I feel like Bundler enables the best of both worlds here.[1] That said, it's obviously a preference.
> NPM also has the ability to have isolated dependencies from each other dependency.
Ahh yes, I've heard about this. In Ruby, it's not really feasible due to the language, and I'm a bit skeptical that it doesn't lead to super extra complexity, but then again, I haven't used it myself...
Totally agree regarding static/dynamic linking. Though isn't it the same thing with NPM: you still have two copies of the library.
> I'm a bit skeptical that [isolated dependencies] doesn't lead to super extra complexity...
It's braindead simple. It's an emergent property of the way node loads modules, which is also pretty darn simple.
When you give Node a package to require, if it doesn't refer to a relative or absolute path, it walks up the directory structure looking for `node_modules`. When it finds one, it looks inside for the module you asked for. If it doesn't find it, it keeps walking up the directory tree.
So you can have multiple module repositories in a directory tree. A given file will always find the one that's "closest."
Npm takes advantage of this by installing every module's dependencies in a `node_modules` directory in the root directory of that module. Subdependencies' dependencies go in their node_modules directory. Voila! Isolated dependencies, braindead simple, albeit with a crazy deep directory structure. (node_modules/foo/node_modules/bar/node_modules/baz...)
Node/npm has the best dependency handling I've yet seen, and I think it's because it doesn't try to outsmart me or go for anachronistic disk space optimizations.
To be fair though, this works because loading a symbol foo in an isolated scope in javascript is a safe operation.
If you were trying to namespace each of those symbols as you went because of a global symbol table (>_> ... c), then this would be a much more complex task.
It's important to realize that rust libraries are c libraries, complete with symbol information; Rust does mangle symbol names per crate by default, but I'm not 100% sure it would stop you from having some issues where the public symbols in a crate cause global symbol table conflicts.
When everything has its own version of a dependency, don't you risk getting into a condition where you end up with incompatible objects, e.g. Lib-v1.Foo and Lib-v2.Foo?
> I'm not sure what's anachronistic about efficient use of disk space
Consider this: A frequently used module in one of my trees is "glob". It's repeated four times in two different versions. It takes 207K each time, including subdependencies. The wasted space is 414K... more than a whole floppy! ;-)
Or to put it another way, I've wasted 0.00017% of my rather small 250GB SSD.
Optimizing that is anachronistic.
> that's nothing that can't be solved via hardlink deduplication
Yep, and `npm dedupe` [1] does something similar. This does have the potential to become massively complex, though. You have to re-dupe when one dependency upgrades but another doesn't, and you also need to deal with the fact that two modules' dependencies may be sharing memory space that a module author was expecting to have to herself. (Modules are cached, so changes to "module global" variables are shared, but modules loaded from different locations are cached independently.)
On the other hand, my Java .m2 directory counts 2469 jar files for a total of 2.4G. My considerably smaller .cabal is still 253MB. I won't complain if hardlinking prevents package repo sizes from going out of hands.
> Yep, and `npm dedupe` [1] does something similar.
Not quite. From what I understand, it attempts to manipulate the package hierarchy. On the other hand, hard link deduplication doesn't need to do any such thing, it simply needs to hardlink files with the same checksum. No intelligence required. Some time ago, I used rsync to do this kind of thing when deploying large binary dependencies, and it is very practical (IMHO, hard links get too little love).
> NPM also has the ability to have isolated dependencies from each other dependency. For example, module A can pin module B to version 2, whereas module C can pin module B to version 3. So you have independent copies.
That seems like a bad idea even in a dynamic language, perhaps especially in a dynamic language. What if you return a type from module A that is defined in module BV2 to module C? Suddenly it can't find the method (or the method does something different than expected). At least in a static language you could prevent this to some extent. It seems like it would be better to have version ranges and fetch the newest released version, or fail if no version is acceptable, with interactive/cmdline overrides.
That's what made it intolerable when I started getting into Ruby years ago. Perhaps this can also be attributed to Ruby being slow, but it was (and still is) too slow to install anything or run commands. This isn't bundler's fault, though, as gem and many ruby CLIs also run super slowly.
My pet peeve with Bundler (and Rubygems) is the explosion in stat calls resulting from having to look in a larger and larger set of directories for each loaded Gem as your Gemfile grows.
For one of my larger Ruby apps I resorted to saving a copy of $LOAD_PATH before initializing Bundler, and then using some hacks to reset $LOAD_PATH to the basic load path plus the results of grep'ing the "post Bundler" load path for each of the relevant gems and its dependencies, so that each require only worked on the minimal required $LOAD_PATH. The result cut the number of stat calls from 100k+ on startup to <20k without any other changes, and cut tens of seconds off the startup time...
The stuff I did is a giant hack, but frankly Bundler needs serious work there - for almost all Ruby code I've worked on, poor load path handling in Rubygems and Bundler accounts for 90%+ of startup time.
Yeah, strace/dtrace/truss/whatever is your friend, but it seems like it is probably alien to most Ruby developers given how many obvious performance issues you can spot with it (then again, most developers in general seemingly have never bothered to look at a trace of their apps)
I don't think Bundler and Rubygems' path handling would ever have been written and released in the state they are now if someone had seriously looked at the actual system calls it generates for any app with dependencies on more than 2-3 gems....
Especially because there's a simple-ish solution: create manifests of files for each gem, load them, and check a hash of the combined set of files first, and fall back to a much trimmed down load path if the file isn't from any of the specified gems. Bundler could even easily create an aggregate manifest of all the gems the app depends on to make the check very cheap.
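The manifest idea, sketched generically (in Go for illustration; the names and structure here are mine, not Bundler's or Rubygems' internals): a one-time map lookup replaces stat'ing the file against every load path entry.

```go
package main

import "fmt"

// manifest maps a requirable filename to the directory of the gem that
// owns it, built once at install time rather than probed on every require.
type manifest map[string]string

// resolveRequire consults the manifest first (one hash lookup, no stat
// storm) and falls back to the old O(dirs) load-path scan only for files
// the manifest doesn't know about. The exists func stands in for a stat
// call so the sketch stays self-contained.
func resolveRequire(m manifest, loadPath []string, name string, exists func(string) bool) (string, bool) {
	if dir, ok := m[name]; ok {
		return dir + "/" + name, true
	}
	for _, dir := range loadPath {
		if p := dir + "/" + name; exists(p) {
			return p, true
		}
	}
	return "", false
}

func main() {
	m := manifest{"json.rb": "/gems/json-1.8/lib"}
	path, ok := resolveRequire(m, nil, "json.rb", nil)
	fmt.Println(path, ok) // resolved without touching the load path
}
```

With an aggregate manifest covering every gem in the bundle, the fallback scan would only ever run for the app's own files.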
I keep meaning to try to hack something together, but for now I have something half-assed that cuts enough seconds off my app start times that I haven't been able to prioritise it.
I can’t speak for the GP, but I have wasted a good number of hours on problems relating to bundler installing/upgrading global gems. There is --path, which solves this, but it seems to me that "--path vendor" (or something similar) should have been the default, similar to how npm will never touch anything outside your project directory without being explicitly asked to do so.
Here's a non-comprehensive list of issues I have with bundler. Don't get me wrong, I think in many ways it's a lot better than what we had before. But I also don't believe it's advanced much since its initial debut and think we can do better (and yes, I've tried on that front):
* You can't override a dependency:
This may very well be an issue with rubygems, but I'd have hoped bundler adoption would have fixed it if that's the case. Since dependencies and their versions cannot be overridden, transitive dependency conflicts are a constant minefield. The naive solution is to modify your gemspec to be overly permissive, and I get pressure to do this frequently with gems I work on. But now I'm asserting that my gem works with some hypothetical dependency that hasn't been released yet. And I've watched that blow up many times. Semantic versioning does not fix this.
* Dependencies can disappear on you:
Yanked gems irk me, but I appreciate their value in preventing the installation of an unwanted version when one isn't specified. However, it also explicitly blocks installation of a particular version, effectively ignoring the version specification in Gemfile. I may be romanticizing things here, but I don't recall ever seeing a non-SNAPSHOT published artifact being removed when I was working with maven or ivy.
What's frustrating is that the gem still exists, just not in the index. I would expect Bundler to realize I do want to install the version I've specified, fetch the file, and do the installation. Ultimate control of my dependency graph should rest with me, not the whim of an upstream developer. I realize this is a rubygems.org thing, but it also strikes me as a solvable problem that many don't see as a problem. And it means that a historical checkout of my app almost certainly will not run without changes to the Gemfile(.lock), even though reproducible builds seem like a core value of a dependency management tool.
In any event, you will only discover this when deploying to a machine that has come up since the gem was yanked. Common staging server and CI strategies don't catch this since it's essentially a race condition in the gem ecosystem. You simply won't find out until your deploy fails. It undermines any trust in the ecosystem and the only viable solutions are: 1) run your own gem server; 2) modify gemcutter to dismiss all yanks, or 3) vendor every gem your app uses.
* Platform-specific gems don't resolve: Specifying the same gem name for different platforms will not work. Apparently the dependency graph can't be resolved if the gem names aren't unique; the platform part isn't taken into consideration, from what I gather. This made it really hard when I was trying to port some C ext. gems to a JRuby equivalent.
* It promotes bad practices:
This one is admittedly contentious, but given bundler was created basically for Rails 3 and Rails is the worst abuser, it's also hard to divorce the two. But requiring your entire dependency graph up front is just not a good idea. It's bad for performance and it's bad for memory. It's why apps take 30s to boot. Most non-trivial Rails apps I've come across are basically multi-tiered applications in a monolithic codebase. That means fog is getting loaded in controllers and haml is getting loaded in Sidekiq jobs. It's just a very odd situation. I've tried to defer loading, but this is a battle that's hardly worth fighting because of railties. Auditing every gem to see if it has a railtie is tiresome and invalidated as soon as a new version comes out. If a gem has a railtie, it needs to be required at a very specific point in the Rails boot cycle, otherwise your app just won't work the same. Figuring out that difference will likely drive you insane.
Likewise, every new gem created from Bundler has a gemspec that shells out to git at least once. So, if you have a dependency on a gem sourced from git in your Gemfile, you get to pay that cost on every app load. And if you end up somehow getting different results on different machines, then you're not really running the same thing, which seems odd to me for a dependency management tool.
* It's slow:
Gem installation has gotten a lot faster since the early days. And dependency resolution has gotten better as well. So, I'm not saying no work is being done here, but it is still slow. This is a complaint that is always levied at maven, too, so Bundler certainly isn't unique in this regard and I'd argue it got faster in a much shorter timeframe than maven did, so that's promising.
* It was designed for MRI:
This one may be unfair since I don't use rbx, but bundler wasn't really built with JRuby in mind. "bundle exec" is used everywhere now and it forks a process. Forking the JVM is anything but light. The solution is to use binstubs, which avoid the forking. But also means you can't really use both JRuby and MRI in the same codebase.
I really hope that Cargo learns from some of the pitfalls of Bundler. It seemed like Bundler didn't really learn from the pitfalls of other package managers. I wasn't involved with any of the design decisions, so I certainly don't want to say it was pulled together haphazardly. But it also seemed to overlook problems that tools like maven had solved over the past decade. I'm sure a fair bit of that had to do with the underlying rubygems system, but the distinction is also a bit moot from the user perspective.
Totally agree. At Snowplow, we have had far more problems supporting user-deployments of two in-theory simple Bundler-wrapped Ruby CLI apps than with deploying _the entire rest of our stack_. The Ruby packaging, deployment and environment ecosystem is an absolute mess - we are going to move these two apps to JRuby-Warbler, which is the only sane take on application packaging in the whole Ruby ecosystem. It's a real shame that Mozilla couldn't get people from npm or Lein, heck even SBT, involved.
I am very skeptical about this. Every language which needs a "package manager" tends to make building standalone applications painful.
I do think C++ is in dire need of being replaced, it is unsafe at any speed and full of legacy cruft. Unfortunately D jumped on the GC train and had other serious issues as well, allowing C++ to survive that attempt without a scratch.
Rust finally did the right thing, aiming for the same zero overhead/you only pay for what you use design which made C++ such a success.
However, C++ is a platform agnostic language, while Rust thus far has only supported Linux as a first class platform. I find the attitude of the Rust devs ("we will improve Windows support once the language is stable") deeply misguided. It meant that they largely missed out on feedback from Windows developers during the development of the language. "Windows developers" also means all the AAA PC(+console) game developers (the most diehard C++ users). Linux still is not a relevant platform there.
And the Rust developers seem to continue to go down that rabbit hole by enlisting Ruby developers to develop a "package manager". As a Windows guy the very word makes me cringe. How is that thing going to integrate with Visual Studio and other Windows specific concerns?
Ruby only really works on Linux, first advice you get as a Windows guy wanting to learn Ruby is "Install Linux, if you try to do Ruby development on Windows you are in for a world of pain". I do not trust any Ruby developer to write portable software, they are married to the GNU/Linux ecosystem. Would you expect Microsoft guys to develop something which actually works well on Linux?
Also remember that C++ does not have a "package manager". Some of the most complex and massive applications in the world are written in C++, yet you do not hear many C++ developers crying "When will we finally get a package manager?". It is not even on the agenda. C does not have one either.
"Package managers" are a Linux-ism, they deliver a certain UX you may or may not like (personally I hate it with passion), but they should not be part of a platform agnostic programming language. A package manager may belong to a Linux development environment for Rust, but the language itself and its library handling should be completely independent of it. Rust libraries should work just like C++ libraries so that they do integrate well with other development environments.
I see Rust becoming a new OCaml, utterly Linux-centric and thus leaving C++ as the sole competitor in the maximal performance + high-level abstractions + platform agnostic category.
For the sake of games no longer crashing randomly because of memory corruption bugs: change course now.
Windows is a first-class platform for Rust. Windows is Firefox's most important platform, and Servo needs to demonstrate that opportunities for optimization that could be relevant to Gecko are applicable to browsers running on the Windows platform.
If Rust seems like it's less well-integrated into Windows than it is into Linux and OSX, it's because none of the Windows developers who keep complaining about Windows support seem to be willing to step up to the plate. I've made this same offer several times to self-proclaimed Windows devs on HN: if you want Rust to get better on Windows sooner, make it happen sooner. Until then, the Rust devteam will spend their resources finalizing the language itself, while the Linux and OSX and FreeBSD and Android/ARM contributors continue to provide better integration with their chosen platforms.
This is not hostility. Please help us. We know that Windows is important. We need experts.
To be fair, the gaps between Linux, OSX, FreeBSD, and Android are like a creek next to the oceanic gap between any of them and the kind of Windows support that most C++ Windows devs I know (who are largely professional game devs) will consider useful. You aren't asking them to tweak a few things here and there to make a build go, you're asking them to make it fit a completely different system. You may find a volunteer willing to make that happen, but I don't think telling them they're all lazy bums compared to that guy who manages the FreeBSD port is going to get them there.
I also really want to be clear, as someone who's a huge fan of rust (from a distance so far) and wants to see it succeed, those windows devs will never agree with you about it being first class so long as it requires mingw to build. I know it makes things a lot easier when porting, but if it doesn't play nice with VC++ they won't give it a second look.
> You aren't asking them to tweak a few things here and there to make a build go, you're asking them to make it fit a completely different system.
Precisely. Given this, is it such a surprise that Rust was initially written to use MinGW on Windows rather than MSVC back when it was an understaffed two-man project?
I admire what you're doing, but I find the assertion that Windows is a first-class Rust platform difficult to square with your repeated requests for more Windows volunteers to make that happen. Windows developers are everywhere. Why doesn't Mozilla hire some Windows experts for the Rust team?
Mozilla is continually hiring new people to work on Rust. They haven't yet hired any Windows experts because better Windows integration is merely a high priority, and not the highest priority (that would be stabilizing the design of the language itself).
Agreed. The message from Mozilla is that "Windows is really important to us, please come do it". Meanwhile they're paying for the development of yet another Unix-style package manager.
Clearly there is a list of priorities that guide these investments, and Windows support doesn't rank very high on that list.
> Meanwhile they're paying for the development of yet another Unix-style package manager.
A package manager that will be platform-independent, and thus benefit Windows devs as well. At Rust's current state it's hardly surprising that they've chosen to allocate resources to efforts that will benefit all platforms equally. Meanwhile, the community for every platform but Windows is chomping at the bit to submit improvements. No doubt they're still holding out for the Windows community to do the same, without having to resort to hiring a dev specifically for Windows. But it will happen, if it has to.
You should really put more effort into getting Rust to work smoothly on Windows without counting on volunteers.
Think of MySQL vs. Postgres adoption. I believe that at a certain moment the primary reason all Apache/PHP stack devs were choosing MySQL over Postgres was that MySQL was very easily installable on Windows (all these LAMP/WAMP packages that created a ready-to-use dev environment).
> However, C++ is a platform agnostic language, while Rust thus far has only supported Linux as a first class platform. I find the attitude of the Rust devs ("we will improve Windows support once the language is stable") deeply misguided. It meant that they largely missed out on feedback from Windows developers during the development of the language. "Windows developers" also means all the AAA PC(+console) game developers (the most diehard C++ users). Linux still is not a relevant platform there.
This is not correct. Mac OSX is very much a first class platform supported as well as (or better than) Linux, and the "wait until the language is stable" is not the attitude that is being taken: firstly, every single changeset is required to pass tests on Windows to be merged (i.e. Windows is being regarded as first class too, even if there's a few lacking areas), and secondly, people are working on it, although slowly, such as the removal of the dependency on GCC's C++ runtime.
> A package manager may belong to a Linux development environment for Rust, but the language itself and its library handling should be completely independent of it. Rust libraries should work just like C++ libraries so that they do integrate well with other development environments.
I don't see this changing particularly. AIUI, cargo is more designed to be a (pluggable) dependency manager, exposing as much of its internals as possible (via CLIs, i.e. callable binaries) so that external tooling can hook into it in a sane way.
>every single changeset is required to pass tests on Windows
That alone does not make Windows a first class platform, and the attitude I quoted was exactly what I got from the developers when I pointed out how relatively complex it was to get Rust working on Windows. Note the past tense; that boat has sailed. While Rust was still in heavy development, you needed some serious dedication or an uncommon skill set just to build Hello World on Windows.
Click on setup.exe, wait for the installer to finish, click on "Rust command prompt" - that is how it should have worked. That is what first class support for Windows during development would have looked like, not fiddling around with ports of Linux tools.
>Click on setup.exe, wait for the installer to finish, click on "Rust command prompt"
You're wanting things that don't even exist in Linux land. We still have a multiple stage build process that can take hours (depending on your available processing power). If any pre-built Linux binaries are available from a trusted source, I couldn't find them.
The bottom line is, if you're horribly offended by Rust's lack of Windows support, and want to use the language, then pitch in and lend a hand. That's what people who want more embedded ("bare metal") support are doing (myself included, although I haven't yet open sourced my improvements).
After some initial stumbling blocks because of the pointer semantics, I've started using Rust regularly and it's becoming a very nice language. And it's the only new non-research language right now that actually has fast, deterministic performance ideal for real-time applications and games (Nimrod is close for soft real-time).
> Also remember that C++ does not have a "package manager". Some of the most complex and massive applications in the world are written in C++, yet you do not hear many C++ developers crying "When will we finally get a package manager?". It is not even on the agenda. C does not have one either.
So? C also doesn't have a module system. It doesn't mean it's not a desirable feature.
> "Package managers" are a Linux-ism, they deliver a certain UX you may or may not like (personally I hate it with passion)
Package managers and UX are orthogonal concepts. You're free to not like Linux package managers, but the development of nuget in Visual Studio shows that not everybody embraces the installer download "package management" with enthusiasm. You'll note that most modern (and not so modern) programming languages (Java, Python, Perl, Ruby, Haskell, PHP...) have found the need for one.
You are correct, it would be hard to live without NuGet when writing .NET applications.
The parent is really full of it. Having written several large C++ applications on Windows, a package manager would have been incredibly useful. Having to track down native dependencies and manage platforms/architectures is incredibly difficult and annoying without one.
If you write major C++ applications and do not desire a package manager, either you have serious NIH syndrome and write everything yourself, or you just like wasting time.
>Isn't the Nuget package manager becoming increasing more popular on .Net / Windows environments?
I do not know how popular it is exactly (how can you?), but I am willing to bet money on the fact that the vast majority of Windows developers do not use it.
A large part of the Linux/Mac centering of the current development is the lack of contributions from Windows developers. It's an open source project, and while we make sure it works on Windows, more feedback and contribution from "Windows natives" would be fantastic.
I have never programmed on Windows, and wouldn't be able to gain the experience needed in time for a Rust 1.0 launch. Join the mailing list and IRC channel and offer feedback, if not patches!
Firefox and Mozilla, the biggest users of Rust, are built by Windows developers and multiplatform C++ developers.
C++ has no packages, and making multiplatform code in C++ is hard. Very little code is reusable, especially outside of the platform it came from.
"For the sake of games no longer crashing randomly because of memory corruption bugs: change course now." You realise this is one of the main design goals of Rust?
Lots of people in the Ruby, Python, and Node worlds use package managers on Windows. Besides, Windows is C#, and moving heavily towards tools such as node.js and other multiplatform tools.
I think the fact that Windows devs are getting itchy to have full support is a good sign. It means that the language is becoming popular.
That said, I'd like to point out that the same thing happens all the time with languages developed in Windows land. I happen to really like F#, but Linux support for the language compared to Windows support has historically been relatively poor. It was only the heroic volunteer efforts of the mono devs that brought a respectable degree of support for F# to Linux.
And like mono, Rust is an open source project. Consider pitching in and helping build the Rust you want if you know how, or else consider learning how to do so if you don't.
I do happen to agree that package manager support should probably not be a priority at this stage. I think all available manpower and funds should be used to get the language to 1.0 as fast as possible. People won't put up with massive amounts of breaking changes forever.
> Every language which needs a "package manager" tends to make building standalone applications painful.
Huh? Python has a (sort of) package manager and you can still make a standalone application by bundling every dependency. You are almost certainly confusing the OS-level package manager (used primarily for installing applications) with the language-level package manager (used primarily for building applications). Once you have built a standalone binary, the job of the language-level package manager is done.
> However, C++ is a platform agnostic language, while Rust thus far has only supported Linux as a first class platform. I find the attitude of the Rust devs ("we will improve Windows support once the language is stable") deeply misguided. It meant that they largely missed out on feedback from Windows developers during the development of the language. "Windows developers" also means all the AAA PC(+console) game developers (the most diehard C++ users). Linux still is not a relevant platform there.
I regularly use Rust (nightly) in Windows. Windows support is problematic since it differs from every other major platform, but I have not had any problem serious enough to seriously discourage use of the platform. (Not to mention that I regularly build standalone binaries for Windows!) I admit it is not in the best state, but it should be and is being improved.
> And the Rust developers seem to continue to go down that rabbit hole by enlisting Ruby developers to develop a "package manager". As a Windows guy the very word makes me cringe. How is that thing going to integrate with Visual Studio and other Windows specific concerns? Ruby only really works on Linux, first advice you get as a Windows guy wanting to learn Ruby is "Install Linux, if you try to do Ruby development on Windows you are in for a world of pain". I do not trust any Ruby developer to write portable software, they are married to the GNU/Linux ecosystem. Would you expect Microsoft guys to develop something which actually works well on Linux?
As a multiplatform guy, the introduction of a package manager actually marks a divorce from the strong Unix assumption: Rust has so actively avoided platform-dependent tools that it now does not require the C compiler at all (!), and the only remaining platform-dependent tool was Make. This is now about to change.
> Also remember that C++ does not have a "package manager". Some of the most complex and massive applications in the world are written in C++, yet you do not hear many C++ developers crying "When will we finally get a package manager?". It is not even on the agenda. C does not have one either.
That's why SQLite ships as an amalgamated source file, which is rather unfortunate. The dependency problem is so bad in C/C++ that most developers gave up and grew their own half-baked solutions, which have their own share of problems.
>Huh? Python has a (sort of) package manager and you can still make a standalone application
You can (try) yes. Have you tried it? I have. It was a horrible experience. Support for making standalone binaries is an afterthought in the Python world. It is usually not a simple process that "just works" (on Windows in particular).
>I regularly use Rust (nightly) in Windows
I know a guy who managed to write a game in Ruby and somehow turned the code into standalone Windows binaries. I know that Rust can be used on Windows, that you can use it to build standalone binaries, that is not the point. The point is how well, idiomatically, and smoothly that works.
>Windows support is problematic since it differs from every other major platform
If you do not consider consoles or embedded systems major platforms that is. Most of the world is not POSIX.
>does not require the C compiler at all
The last time I tried to use Rust on Windows it required a particular version of MinGW.
>Dependency problem is so bad in C/C++
I have used C and C++ for a long time, and I cannot say that dependency hell was ever a problem. One of the many things wrong with the "package manager" paradigm is that it encourages people to write software which is a tangled web of dependencies. Software should have few third-party dependencies, that makes maintenance and porting a lot easier. Witness the recent Python 2 -> 3 drama. The primary reason why people did not, could not upgrade to Python 3 was because they had to wait for all the dependencies of their apps to be ported. Many are still waiting.
> I have used C and C++ for a long time, and I cannot say that dependency hell was ever a problem. One of the many things wrong with the "package manager" paradigm is that it encourages people to write software which is a tangled web of dependencies. Software should have few third-party dependencies, that makes maintenance and porting a lot easier.
I think that's a terrific case of "that's my use case and therefore it should be everybody else's too". Try writing a complex web business application with "few third party dependencies" while not reinventing the wheel at the same time. There are many cases where having many third-party dependencies is necessary and the right thing to do, because your customer is emphatically not paying you for writing a web framework, a REST routing layer, an ORM, a full-text indexing system, a logging system and an AMQP client. Attempting to sweep the problem under the carpet by saying "just don't have dependencies" is silly. The only thing to do is to design a dependency system that works.
You write much software like that in C or C++? The sector you mentioned is dominated by other languages (Ruby, JavaScript, Java) for a reason.
You are right that my statement was too generalized, though. Yes, there are sectors where the "gluing together lots of libraries" style is the most practical one and "few third party dependencies" is not practical. However, there are already languages for that, and nobody is waiting for Rust to replace Ruby.
After starting to write a small LLVM-based language in C++ (before going back to Haskell), I already had dependencies on LLVM, xdg-basedir, log4cxx and doxygen in my build system. I was using the system libraries, and kept thinking that it was good this was only a side project, because it was a terrible way of managing dependencies.
As for Rust replacing Ruby, why not? It is a general-purpose language, it eschews a lot of the C/C++ verbosity, lack of memory safety and weak typing which make them such a pain. I could see it easily as a strong contender in the web platform department once it matures.
There are a bunch of us Rubyists in Rust-land, actually. My "Rust for Rubyists" was one of the biggest early community tutorials. And a Rust extension to Ruby in Tilde's Skylight product is the third production deployment of Rust ever.
Rust will not exactly 'replace' Ruby for me for a while, but I can see a day where that's true.
> The sector you mentioned is dominated by other languages (Ruby, JavaScript, Java) for a reason.
Presumably there's a significant amount of software that both is library-heavy and would benefit from being faster-than-Java, enough so to make some extra memory management worthwhile? If the reason that such software is getting written in Java etc. instead is because dependency management is too unpleasant in C++, well then that right there is the case for something C++-like but with good dependency management.
> Have you tried it? I have. It was a horrible experience. Support for making standalone binaries is an afterthought in the Python world. It is usually not a simple process that "just works" (on Windows in particular).

> I know a guy who managed to write a game in Ruby and somehow turned the code into standalone Windows binaries. I know that Rust can be used on Windows, that you can use it to build standalone binaries, that is not the point. The point is how well, idiomatically, and smoothly that works.
I've done it many times, and it is not that bad (it can even give a single executable file at the cost of some initial slowdown). In fact, it is impressive that a non-compiled implementation can make a working and portable executable file at all. Rust is primarily an (ahead-of-time) compiled language, and it takes a single command `rustc foo.rs` (or possibly `rustc -L<deps> foo.rs`) to produce an executable file `foo.exe`. [1]
[1] Assuming you haven't used 0.9 or later: it now produces a statically linked binary by default. No `libblahblabla.dll` around.
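To make that concrete, here is a minimal sketch (the file name `foo.rs` and the `message` helper are my own illustration, not from the thread). Save it, run `rustc foo.rs`, and the output is a single self-contained `foo.exe` (or `foo` on Unix), with no runtime DLLs to ship alongside it:

```rust
// foo.rs -- compile with `rustc foo.rs`.
// On Windows this yields a statically linked, standalone foo.exe.

// Factored into a function only so the behavior is easy to test.
fn message() -> String {
    format!("{} + {} = {}", 2, 3, 2 + 3)
}

fn main() {
    println!("{}", message());
}
```

The same one-command workflow works whether or not a package manager is involved; a dependency manager only changes how the `-L<deps>` search paths get populated.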
> If you do not consider consoles or embedded systems major platforms that is. Most of the world is not POSIX.
It really depends on the definition (and I'm not saying POSIX is not the most widely available platform), but mobiles alone (approx. 300 million units sold) account for a significant portion of POSIX systems.
> The last time I tried to use Rust on Windows it required a particular version of MinGW.
You haven't used 0.9, right? That restriction was temporary and has now been lifted. I think you still need MinGW for the linker right now, though.
> I have used C and C++ for a long time, and I cannot say that dependency hell was ever a problem. One of the many things wrong with the "package manager" paradigm is that it encourages people to write software which is a tangled web of dependencies. Software should have few third-party dependencies, that makes maintenance and porting a lot easier. Witness the recent Python 2 -> 3 drama. The primary reason why people did not, could not upgrade to Python 3 was because they had to wait for all the dependencies of their apps to be ported. Many are still waiting.
Python 3 is a bad choice in my opinion, as it hasn't learned from the precedent of the Perl 6 "failure" (which was finally resolved when Perl 5 development was restarted). But that does not justify C/C++'s lack of a package system: there is plenty of old and faulty C/C++ code around due to the lack of recent C99, C11 or C++11 compilers at developers' disposal, even without the package system.
> Python 3 is a bad choice in my opinion, as it hasn't learned from the precedent of the Perl 6 "failure" (which was finally resolved when Perl 5 development was restarted).
I think they suffer from opposite issues. Perl 6's problem is that it was way too ambitious for the manpower behind it, and it turned into the Duke Nukem Forever of programming languages. The issue with Python 3 was that it introduced breaking changes, but was widely felt not to bring enough to the table to warrant these changes.
Dividing the language is bad in general, since it essentially divides the ecosystem behind it. In the case of Perl 6 the ecosystem couldn't control the ambitious development goal; in the case of Python 3 the ecosystem could control that, but failed to realize that a smooth transition plan is still a transition. There are less problematic but similar cases in other languages: PHP 4 vs. PHP 5 vs. PHP 5.3 (oops!), D 1.0 vs. D 2.0 and so on.
If you're doing static linking, it seems extremely trivial to get from "have a package manager" to "have a compiled binary that stands alone".
Pull the source down, compile/link it and you're good. App A and B can run with different versions of libfoo, because they're statically linked so you're good to go if slightly wasteful of code footprint. If you're expecting to do runtime dependency lookup by dynamically fetching things through the package manager, then I agree, you're gonna have a bad time.
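That "declarative list of per-project dependencies, pull it down, statically link it" workflow is exactly what a manifest-driven tool automates. A hypothetical per-project manifest might look something like this (Cargo's actual manifest format hadn't been published at this point, so all field names here are assumptions for illustration):

```toml
# Hypothetical manifest for App A: it pins its own version of libfoo.
[package]
name = "app-a"

[dependencies]
libfoo = "1.2"   # fetched, compiled, and statically linked into app-a

# App B, in its own project, could independently declare:
#   [dependencies]
#   libfoo = "2.0"
# Because both binaries are statically linked, the two versions
# never conflict at runtime -- at the cost of some code footprint.
```

The point is that the manager's job ends at build time; what ships is still a single standalone binary.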
"One of the many things wrong with the "package manager" paradigm is that it encourages people to write software which is a tangled web of dependencies. Software should have few third-party dependencies, that makes maintenance and porting a lot easier."
Or it means that you can share the fruits of your porting with others.
If you and I work on a project that depends on module foo, only one of us needs to port foo to platform bar once for both of us (and everyone else) to gain compatibility with platform bar with respect to our dependencies.
What you want instead is a way to have package managers support defining cross platform modules so that bar doesn't have to be forked and published as an entirely separate module.
"The primary reason why people did not, could not upgrade to Python 3 was because they had to wait for all the dependencies of their apps to be ported. Many are still waiting."
Then the problem is a lack of tooling to aid with porting the code to the new platform. Creating a tool that parses code into an AST and analyzes the code to help define how it can be made compatible with the future version of a language should be a tractable problem for 1-2 developers.
At the end of the day package managers are first and foremost social software whose role is to aid meatware in achieving eventual consistency. We've only begun to scratch the surface of what is possible in this respect.
On the flipside the lack of a standard package manager and various problems with libraries in c and c++ cause people to re-invent the wheel all too frequently.
"If you do not consider consoles or embedded systems major platforms that is. Most of the world is not POSIX."
Would you even use a package manager on an embedded system? I would imagine that a package manager is far more appropriate for development systems and systems capable of being used for development. An embedded system would probably use a much simpler install system like what is used in TinyCore linux, if not even simpler than that. In fact, I would expect the package management on an embedded system to be driven by another system, which may be POSIX compliant. That software that puts code on that embedded system or console could be an extension to an existing package manager.
The biggest issue with most package managers AFAICT is that most aren't designed first as a platform with a programmatic interface. At the end of the day, the CLI for a package manager should be just one of the clients driving the package manager.
A package manager like this is intended for development time. Since you don't actually do your development on the embedded system, you can still use it fine.
You'd generate some build artifact with the help of Cargo, and that's what you'd load onto your device.
The Python comparison isn't really good here. Rust is a compiled language and usually statically links things in (which makes total sense for libraries that are neither system libraries nor handled by a package manager, when you only deliver one binary).
Software has dependencies. If Rust's attitude were "software shouldn't have dependencies", it'd simply be trying to wish away a problem. Rust should solve the dependency problem instead of pretending it doesn't exist.
You said that they are trying to eliminate the dependency on make. Please please please tell me that they are considering something designed like tup instead.
Given how long it takes to build rust, I would hope a faster build system is a priority. Running `brew install rust` is bar none the longest build process I've seen in the brew ecosystem.
That's not a problem with the build system, but rather with a fully bootstrapping compiler (it needs to compile itself 3 times) that's doing certain things in rather non-optimal ways (e.g. passing more code than necessary to the optimiser, LLVM).
> I do think C++ is in dire need of being replaced, it is unsafe at any speed and full of legacy cruft.
Lack of decent package management is part of that cruft. As someone else pointed out, you're definitely confusing this with system package management. A good package manager for Rust could make it much, much easier to have good Windows support, because it could solve Windows-specific compilation and packaging problems in one place, rather than relying on each individual library author to solve them. There is no reason it couldn't interact nicely with Visual Studio, unless that is not something Visual Studio is interested in doing.
This could be a really good thing for your apparent interest in Rust, but you're looking at it through the narrow lens of your past frustrations, rather than thinking about and advocating for ways it could alleviate them.
I hate package managers too, but the worst I've ever seen were proprietary "solutions" in the games industry which did do awful tricks to make VS play ball.
It is no exaggeration to say people would avoid updating packages in their projects (i.e. the whole point of using a package manager was nullified) since they knew they would lose days or weeks merely getting back to where they were.
The problems with package management seem to be exponentially proportional to how clever the person implementing the system thought they were. Any more than a declarative list of dependencies per package that can be fully evaluated to a programmer readable dependency tree prior to a single package update and you've created a self sustaining beast that will consume more of your time than developing the actual product.
I personally have much the same reaction to language-specific package managers as you do. But my question is a little different: how is the Rust package manager going to interact with the Linux distribution's package manager?
If the answer starts with "Well, you build your own installation of the language..." (Hi, Haskell!) I am afraid I may start to cry uncontrollably.
> Also remember that C++ does not have a "package manager"
Many large, truly cross-platform, continuously-integrated C++ applications do - out of necessity - have an (ad-hoc, underspecified) approximation of a package manager. CMake-based projects come to mind immediately with the "build the world" approach using CMake external projects. GYP for chromium is the same.
How do you deal with dependencies in Windows-land? Do they wind up all getting embedded in the Visual Studio project? How do you deal with projects with different build systems? Or are Visual studio projects basically the package management in the ecosystem?
On Linux I've basically wound up depending on the distribution packages, but this varies from distro to distro. The other option is embedding a build of all dependencies into the project, but this is slow, painful, and prone to error. Having a neutral package manager for C and C++ would be amazingly useful for this (though it would have to actually be used; we use some Python packages, but some of them don't work when installed via pip, so it's back to depending on the distro package).
> I am very skeptical about this. Every language which needs a "package manager" tends to make building standalone applications painful.
Rust does not need a package manager. We've used it for a long time without one. If the package-manager-free workflow with manual installation (or through an IDE, etc.) is to your liking, use it!
> However, C++ is a platform agnostic language, while Rust thus far has only supported Linux as a first class platform.
Totally false. I'm the earliest developer still full-time on the Rust language (4+ years) and I was exclusively using Mac OS X from the beginning. If you look at Servo, Mac is actually better supported than Linux at the moment.
Graydon was adamant that Windows must be supported from the start, and we have not changed that.
> I find the attitude of the Rust devs ("we will improve Windows support once the language is stable") deeply misguided. It meant that they largely missed out on feedback from Windows developers during the development of the language.
What feedback is that? Most of the language-level concerns are platform-independent, except for COM, etc (which we are familiar with).
Furthermore, we have Windows support, and it's not that far behind the other platforms. Later on in your comments, you talk about a binary installer. That's not something we have on any platform yet.
> And the Rust developers seem to continue to go down that rabbit hole by enlisting Ruby developers to develop a "package manager". As a Windows guy the very word makes me cringe. How is that thing going to integrate with Visual Studio and other Windows specific concerns?
You don't need to use it if you're using Visual Studio. If you like your tools, keep them! That's Rust's attitude.
> I do not trust any Ruby developer to write portable software, they are married to the GNU/Linux ecosystem. Would you expect Microsoft guys to develop something which actually works well on Linux?
Of course I would, if they were competent developers.
> Also remember that C++ does not have a "package manager". Some of the most complex and massive applications in the world are written in C++, yet you do not hear many C++ developers crying "When will we finally get a package manager?". It is not even on the agenda. C does not have one either.
There are tons of popular C++ package managers: Homebrew and apt-get, to name a couple.
> A package manager may belong to a Linux development environment for Rust, but the language itself and its library handling should be completely independent of it.
That is precisely what Rust does.
> Rust libraries should work just like C++ libraries so that they do integrate well with other development environments.
That is also precisely what Rust does.
> I see Rust becoming a new OCaml, utterly Linux-centric and thus leaving C++ as the sole competitor in the maximal performance + high-level abstractions + platform agnostic category.
This is really silly and hyperbolic, and somewhat insulting to those of us who have spent a lot of time making sure the design works well on Windows. Like I said, I've been working full-time on this project longer than anyone else and I exclusively use a Mac.
Mozilla is not going to invest in a language that won't work on Windows, for obvious reasons. We have a lot of Windows developers here too, you know.
First-class support for Windows is evident everywhere in the design, from the use of libuv instead of libevent for the green threading to the lack of exposure of `select` or `fork` due to that not performing well on Windows.
> For the sake of games no longer crashing randomly because of memory corruption bugs: change course now.
To what? Abandoning the package manager? That would alienate a large segment of users who want a tool to create and share libraries and deploy binaries onto servers, all for the sake of unhappy Windows developers who dislike that workflow. Instead I think we should do what we're doing now: invest in the package manager, but allow the tools and libraries to be totally independent of it, so that you can use Visual Studio or whatever you'd like with Rust.
>I'm the earliest developer still full-time on the Rust language (4+ years) and I was exclusively using Mac OS X from the beginning. If you look at Servo, Mac is actually better supported than Linux at the moment.
You mean Xcode and the rest of Apple's developer tools have great Rust support? There are up to date Cocoa bindings?
That would be real Mac support.
I know that many Linux people switched to Mac OS X because it has this whole POSIX system underneath its chrome, which normal Mac users never touch, but which makes an adequate Linux replacement + you get the nice Mac desktop OS on top of that. But as I said I do not see how running ports of Linux tools on that layer qualifies as real Mac support.
>What feedback is that?
Different types of software get developed for Windows; trying to use Rust in these different scenarios might have exposed issues specific to said use cases. Demanding PC games are overwhelmingly developed on and for Windows, for starters. And there is also the issue of "cultural incest". There are certain hegemonic attitudes among UNIX developers about how things should work, how they should be organized, named etc. Attitudes which are often not shared by other developer communities. However, the early development of C++ happened in a UNIX-dominated environment too, so maybe this will not be that problematic.
>you talk about a binary installer. That's not something we have on any platform yet.
The thing is, just dropping source tarballs on people is considered acceptable among UNIX developers, that is the way you do things there. But it is not the way you do things on Windows.
>Of course I would, if they were competent developers.
The issue is not technical competence, but "cultural competence" if you will: building software that people on platform X actually want to use requires respecting that platform's cultural norms. You just gave a nice example of lacking cultural competence by assuming that not providing a binary installer which does not require manual twiddling is just as acceptable on Windows as on Linux, while it really sends a bad message on Windows and leads to Windows developers never bothering with Rust at all. It is like, nobody on Windows has a problem with closed source drivers .. but the Linux devs hate those .. bad culture fit. It matters, a lot.
>There are tons of popular C++ package managers: Homebrew and apt-get, to name a couple.
apt-get is the Debian package manager, Homebrew is for UNIX devs using the POSIX layer of OS X; neither is a "C++ package manager". These pieces of software are OS-specific tools and have no direct relation to the C++ programming language at all.
>That is also precisely what Rust does.
Well, that is great. However, I remain highly skeptical. I can already see the install instructions for Rust libraries on Github etc.: "cargo install foobar", no word about how to build the library without Cargo, the dependencies only specified in Cargo metadata files, etc. Plus, as I said, package managers encourage developers to write software with lots of dependencies .. which becomes a pain to build without said package manager. While in theory Cargo may be optional, I doubt it will be in practice. The whole Rust ecosystem will grow based on the assumption that everyone uses Cargo, and everyone who does not will be in for a world of pain. Other languages that have official package managers show this.
>Mozilla is not going to invest in a language that won't work on Windows, for obvious reasons.
There is a big difference between being able to turn your particular code base into Windows binaries and actually providing a good development experience on Windows. The first does not necessarily require the second. I think most (all?) Mozilla devs use a UNIX-style development environment, even if Windows is the dominant target platform.
>First-class support for Windows is evident everywhere in the design, from the use of libuv instead of libevent for the green threading to the lack of exposure of `select` or `fork` due to that not performing well on Windows.
Again, I was mostly talking about the developer experience, not about the ability to somehow build fast binaries. That is expected too of course, but not enough for the "first class" label in my book.
>To what? Abandoning the package manager?
Yes. See above for the reasons why. When I build my C++ software on Linux I do not need a special "C++ package manager". I just apt-get the dependencies (e.g. SDL). Does Rust really need something more? As I said, I think a language specific package manager will undermine Rust's ability to be a general C++ replacement.
>and somewhat insulting
I did not mean to insult anyone. I want Rust to succeed, that is my only agenda here. I think you are on the wrong track with this and thus I wanted to warn you about it. Maybe I am completely wrong, we will see. I will not bother you again, because as a Windows game developer Rust is dead to me now.
Maybe I will look at it again once I read about the first major game written in Rust being released, which will probably be somewhere between 5 years from now and never. But right now my money is on never.
> You mean Xcode and the rest of Apple's developer tools have great Rust support? There are up to date Cocoa bindings? That would be real Mac support.
Yes! In fact, I've given talks on using Xcode's Instruments.app with Rust! You get full support for Rust with it. There are also actual up-to-date Cocoa bindings, which Servo uses.
I have no idea where you have gotten the idea that Rust cares only about Linux.
> You just gave a nice example of lacking cultural competence by assuming that not providing a binary installer which does not require manual twiddling is just as acceptable on Windows as on Linux, while it really sends a bad message on Windows and leads to Windows developers never bothering with Rust at all. It is like, nobody on Windows has a problem with closed source drivers .. but the Linux devs hate those.. bad culture fit. It matters, a lot.
We aren't done with the language yet. There is no reason why we can't ship a binary installer.
> Well, that is great. However, I remain highly skeptical. I can already see the install instructions for Rust libraries on Github etc.: "cargo install foobar", no word about how to build the library without Cargo, the dependencies only specified in Cargo metadata files, etc.
Rust has the ability to build libraries without cargo. You can do it today. Cargo is a package manager, not a full-fledged build system.
When it comes to dependencies, note that Rust specifies dependency information in the crate itself, so it is independent of cargo. This is by design.
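A minimal illustration of that point (in current syntax, with a hypothetical dependency name): a single-file crate builds with plain `rustc`, and any external dependency would be named right in the source via `extern crate`, not only in Cargo metadata.

```rust
// main.rs -- builds with `rustc main.rs` alone; no Cargo involved.
// A real external dependency would be declared in the source itself:
// extern crate mylib;  // hypothetical crate, resolved by whatever build tool you use

fn double(x: i32) -> i32 {
    x * 2
}

fn main() {
    // Compile and run by hand: rustc main.rs && ./main
    println!("{}", double(21));
}
```

Because the dependency declaration lives in the crate, a Makefile or an IDE can pass the right `-L`/`--extern` flags to `rustc` directly; Cargo is just one tool that automates that step.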
> Plus as I said package managers encourage developers to write software with lots of dependencies .. which becomes a pain to build without said package manager.
Software has dependencies. Our answer to dependencies cannot be "don't have dependencies", as you think it should be. That is trying to wish away a problem instead of solving it, and would be far worse for language adoption. For a Windows game developer that may be a workable approach, but for server software (HTTP, routing), browsers (images, 2D rendering, font stuff), and scientific software (LAPACK), it just isn't realistic.
> While in theory Cargo may be optional, I doubt it will be in practice. The whole Rust ecosystem will grow based on the assumption that everyone uses Cargo, and everyone who does not will be in for a world of pain. Other languages that have official package managers show this.
Most languages with package managers (really, most languages, period) run on a VM and don't produce native binaries, so you're probably running them from the command line or using a language-specific IDE to begin with. Because of that, installing packages from the package manager is scarcely more effort than using the language at all, and so everyone uses the package manager. In a language where a large portion of people use Visual Studio and the package manager is too much of a hassle, I suspect library developers would not hardwire cargo into their libraries.
(Not that I think it's really possible to "hardwire cargo" or make a "world of pain" for users who don't use it. All cargo does is automate finding and installing libraries. You can always do it manually. cargo is an independent tool that does nothing but read metadata and invoke the Rust compiler; it is not integrated into the toolchain.)
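To make "nothing but read metadata" concrete, here is a sketch of the kind of declarative manifest cargo consumes (names, versions, and exact format are hypothetical; the format was still settling at the time):

```toml
[package]
name = "hello"
version = "0.1.0"
authors = ["you@example.com"]

[dependencies]
# Each entry is just a name plus a version requirement. Cargo resolves
# these and invokes rustc; nothing here is required to build by hand.
rust-sdl = "0.9"
```

A build system that ignores this file entirely can still compile the same sources, since the manifest adds no information the compiler itself needs.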
> Yes. See above for the reasons why. When I build my C++ software on Linux I do not need a special "C++ package manager". I just apt-get the dependencies (e.g. SDL). Does Rust really need something more? As I said, I think a language specific package manager will undermine Rust's ability to be a general C++ replacement.
We can't rely on OS package managers to package Rust for a good developer experience.
* First of all, it's a chicken and egg problem: the OS package managers are slow to update, and they don't have much of a motivation to update without Rust uptake. But without a good package management system, Rust will have a hard time getting that uptake to begin with. Look at the situation now: Rust has been around for a while and OS package managers have little support for it. We have to take the situation into our own hands.
* Some OS package managers (most notably Homebrew) do not like language-specific packages as a matter of policy, other than for C and C++; they prefer that languages have their own package managers.
* Many developers prefer faster development than OS package managers allow for. We live in the age of GitHub, and the dead simple "push your libraries to a server for the world to use" has been extremely important for communities like Node.
* For servers, it's very important that your production machine be able to replicate the development environment precisely. This is why you need a system that can tie your software to the exact versions of your dependencies.
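The version-pinning idea in that last point can be sketched concretely: alongside the manifest, a generated lockfile records the exact resolved version of every dependency, so a production machine rebuilds with precisely what the developer tested against (contents below are hypothetical):

```toml
# Cargo.lock -- generated, and checked into version control for applications.
[root]
name = "myserver"
version = "0.1.0"
dependencies = [
    "rust-sdl 0.9.1",
]

[[package]]
name = "rust-sdl"
version = "0.9.1"
```

The manifest says "any compatible 0.9"; the lockfile says "exactly 0.9.1", which is what makes deployments reproducible.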
> I did not mean to insult anyone. I want Rust to succeed, that is my only agenda here. I think you are on the wrong track with this and thus I wanted to warn you about it. Maybe I am completely wrong, we will see. I will not bother you again, because as a Windows game developer Rust is dead to me now.
Because we have a feature that you don't have to use to use the language or libraries, Rust is dead to you? I have no idea how that makes any logical sense. Having a Rust package manager does not preclude good Visual Studio support.
> Maybe I will look at it again once I read about the first major game written in Rust being released, which will probably be somewhere between 5 years from now and never. But right now my money is on never.
We are not just targeting Windows game developers. If having features that you aren't going to use and don't have to use is going to be a blocker for adopting it, I think that's irrational, but I can't do anything about it.
So far most if not all of the games in Rust are using SDL as a layer over Direct3D, because they don't want to be Windows-only. That's a dependency right there. Fetching and installing the latest version of rust-sdl is a real pain, and having a way to automate that can only help adoption.
That post reminds me of a farm field: so many straw men.
I appreciate you perfecting it with accusing me of being illogical after constructing all of them. Your post does not deserve a detailed reply.
I'm all for a Rust package system. But Rust isn't even stable, or sort-of stable, yet. What is the point of working on a package manager when they don't even know what vectors or the extern crate/mod system will look like in 3 months?
A package manager will be a forcing function for stability. The Rust team wants to ship Rust 1.0 sometime late this year (I think? Correct me if I'm wrong)
Building a package manager is not a one-month project, and getting it done in parallel with the stabilization of the core means that there will be a full-stack system that people can use and help iterate on as things stabilize, and that will be ready to go once Rust itself is ready for mass consumption.
We do know what those two things will look like. The design is fairly ironed out at the moment, just not fully implemented. For vectors (dynamically-sized types), there's even a PLT Redex model to verify that the design holds together…
Package managers make everything way more complex than it is supposed to be. I see package managers as a component of an operating system. The proper way of packaging software source is having it contained in a directory tree with one public build script (a Makefile, a shell script...) at its root. One or many build artefacts are generated, which are then at the user's disposal.
With language-specific build systems, this process gets more complicated. They prevent the user from arranging their source tree the way they want, customising the build process, and making the whole thing as convenient as running make or, e.g., ./build.sh. Provided a conventional compiler command (e.g. the cc interface), it is easy to create a Makefile that exploits it.
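A minimal sketch of that "public build script" idea: a Makefile driving a conventional compiler command (file and target names here are hypothetical):

```makefile
# Build a program from plain sources with the conventional cc interface.
CC ?= cc
CFLAGS ?= -O2 -Wall

app: main.o util.o
	$(CC) $(CFLAGS) -o app main.o util.o

# Pattern rule: compile each .c file to a .o file.
%.o: %.c
	$(CC) $(CFLAGS) -c $< -o $@

clean:
	rm -f app *.o
```

Anyone with make and a C compiler can build this without any language-specific tooling, which is the property being argued for.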
I have had a lot of confusion with the Go compiler when I tried to build and use a checked-in, external package. Python's pip is quite complicated. Cabal can easily be replaced with a bunch of Makefiles. Binary packages can be supplied as tar/zip archives, and users would eventually package them for their OS. Also, most of these package managers end up being used for installing applications, which is problematic.
I have not used Rust, but I will try it out. The package manager, though, is not just a bad idea, but also an inconvenience.
I don't know much about Rust, but AFAIK it's mostly static linking, like Go. So two Rust applications won't have to share anything, and development packages do not have to be converted into deb packages, unlike ruby or python packages.
I have one program here using libstd-3e5aeb83-0.9.so, libgreen-83b1c0e5-0.9.so, librustuv-2ba3695a-0.9.so, libcombinations-6b2260d9-1.0.so, and libbisect-441e5ec3-1.0.so (the last two are local libraries). And I have another that doesn't, according to ldd. Both compiled with the same flags.
a) no to cargo exec
b) If you have a better solution for bundler, please open an issue and suggest it.
c) There are a few simple solutions to avoid `bundle exec`, which is mostly there to make things easier when getting started. I'm sure you spent a few moments getting to know your tools, so you must already be aware of these.
I have included my phone number on virtually every piece of public email (to mailing lists) I have ever written. I have found that people do not abuse it.
That's pretty amazing about the phone number, honestly. I meant it only in jest. Having sat in a room with you at Pivotal and heard you talk about your work on Bundler was very cool, and I appreciate your work. I know you take feedback seriously, so I guess it's not that surprising.
For Rust, this is good. I'm sure this is very good for Rust.
Take the rest of my message with a grain of salt. It's 6:30am and I'm working through my first cup of coffee still. I'm not intentionally trying to be a grumpy old man.
For the rest of us, I'm concerned that yet-another-package-manager (YAPM? Yap-meager?) will just continue to fracture the library ecosystem.
Why can't something like APT handle it? NPM doesn't work the same as PIP, doesn't work the same as Nuget, doesn't work the same as Gem, other than the most basic install functionality. Packaging libraries for distribution is different for each, and if you have a problem, tearing into the system to figure out where the failure occurred is different for each.
Maybe it's because, through hard-fought experience, I've finally learned how to manage .NET dependencies without too much headache. Don't ever even think of trying to use the GAC. Don't let your developers install the dependencies on their own. Just make a directory full of DLLs in your project root and use relative paths to load them. It's the only way I've been able to get developers up and running with a project in Visual Studio as soon as they clone the repository. Even Nuget gave me issues (though granted, I gave up on it so fast I don't remember what they were, other than telling the jr. dev to just dump the DLL in the libs directory already and get to work on the issue list).
Attempting to isolate your library ecosystem from the file system feels like it encourages "reinventing the wheel" at the language level for libraries that will mostly be the same across platforms. Do we really need to figure out how to do database connections in yet another language? Why are there 15 different syntaxes for positional arguments in strings?
I'm just getting a little... weary... of starting to learn a new programming language and spending the next few hours just figuring out that they've renamed printf to writeln and %s to {0} for no good reason, or that there are no database connectors yet, or if there are, they only implement a strange subset of databases. It almost feels like language devs have gone out of their way to be superficially different from everyone else, without being substantially different.
Given that most languages have support for some kind of foreign function interface, especially with the C ABI, it seems like we have the tools available to us to start building cross-platform libraries and distribute them regardless of consuming language.
I've been wanting to dabble in Rust for a little while, but damn, yet another package manager to learn, yet another notion of what a library ecosystem should look like, just doesn't excite me right now. It's time I will have to spend to get in the door, and I am not even sure right now if I will want to stick around.
But I suppose with an initiative like Rust, isolation is probably the correct ideology, considering it's about security/performance before productivity.
Anyway, complaining done. No hate for Rust-team's work. Just kind of yearning for a probably unobtainable utopian future :/
In the end, Rust will always just produce the same types of binary artifacts that C++ does. Currently people use Make and friends to build Rust, and that will always remain possible. Cargo is just an attempt to simplify the versioning, updating, and dependency-resolution stories in a platform-independent way.