- uno depends on json 1.3.6
- dos depends on json 1.4.12
- tres depends on json 2.1.0
Cargo will use json 1.4.12 for uno and dos, and json 2.1.0 for tres.
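A toy sketch of that resolution strategy (in Python, with hypothetical data): requirements are grouped by major version, and each group is unified to its highest requested version. Real Cargo's resolver does more than this (it honors upper bounds within a major version), so this is only an illustration of the grouping idea.

```python
from collections import defaultdict

# Hypothetical requirements: package -> required json version
requirements = {
    "uno": (1, 3, 6),
    "dos": (1, 4, 12),
    "tres": (2, 1, 0),
}

def resolve(reqs):
    """Group requested versions by major number, then unify each
    group to the highest version requested within it."""
    by_major = defaultdict(list)
    for version in reqs.values():
        by_major[version[0]].append(version)
    return {major: max(versions) for major, versions in by_major.items()}

print(resolve(requirements))
# {1: (1, 4, 12), 2: (2, 1, 0)} -> uno and dos share 1.4.12; tres gets 2.1.0
```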
Hopefully Rust builds a culture that respects semantic versioning better than the Ruby & Node cultures do. That has to start at the top. There were several Rails 2.3.x releases with minor ABI incompatibilities. Truly respecting SemVer would have required those patch-level updates to get a new major number.
Note that this is a sliiiightly modified SemVer: in SemVer, these three things would conflict. It would be strict SemVer if uno depended on ~>1.3, and dos on ~>1.4.
We hypothesize that this difference works better for AOT compiled languages. And since it's pre-1.0, it's all good. This is the 'we'll watch this closely and adjust' from the email: it might not be okay.
When you declare an x.y.z dependency, tools that use SemVer (like Bundler, which is the biggest influence on Cargo) assume x.y.(>z). So, a 1.4.12 dependency says "anything at least 1.4.12, but less than 1.5.0."
When you declare an x.y dependency, tools that use SemVer assume x.(>y). So, a 1.4 dependency says "anything at least 1.4.0, but less than 2.0.0."
This is assuming the ~> operator; if it were =, it would ONLY match 1.4.12, which is even more restrictive.
The project _should_ work, which is why Cargo is using this modified version of ~>.
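For readers unfamiliar with the ~> ("pessimistic") operator, here's a rough Python sketch of the matching rule described above. This is an illustration only, not Bundler's or Cargo's actual implementation, and it assumes the candidate version has at least as many components as the requirement.

```python
def pessimistic_match(req, version):
    """~> matching: the last component of the requirement may grow,
    but every component before it must match exactly."""
    req_parts = tuple(int(x) for x in req.split("."))
    ver_parts = tuple(int(x) for x in version.split("."))
    prefix, floor = req_parts[:-1], req_parts[-1]
    return ver_parts[:len(prefix)] == prefix and ver_parts[len(prefix)] >= floor

# "~> 1.4.12": at least 1.4.12, but less than 1.5.0
print(pessimistic_match("1.4.12", "1.4.20"))  # True
print(pessimistic_match("1.4.12", "1.5.0"))   # False

# "~> 1.4": at least 1.4.0, but less than 2.0.0
print(pessimistic_match("1.4", "1.9.3"))      # True
print(pessimistic_match("1.4", "2.0.0"))      # False
```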
What would be fantastic is if a Cargo repo could refuse a package which doesn't follow this policy (try to upload a new minor version with an incompatible ABI, get 400 Bad Request as a response). Of course, it won't save you from a change in semantics, but it would already be a good step forward.
It would be impossible to do this perfectly for a Turing-complete language. You could get close by running the previous minor version's test suite against the new update, but that would essentially turn the package repository into a continuous integration server, which is expensive to maintain. There's another, less perfect way of detecting API breakage: use the rustdoc tool to export the project's documentation as JSON. This would let Cargo detect whether items had been added, removed, or renamed, which covers a large number of the cases in which API compatibility is broken. If it encourages developers to bump the version number rather than fix the flagged incompatibilities, then it will also drastically reduce the number of incompatibilities in the package repository that can't be detected by rustdoc. While it's impossible to completely eliminate the human element in upholding the version-number contract, software can limit the number of errors in the system.
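As a rough illustration of that rustdoc-diff idea (the item lists below are hypothetical stand-ins for rustdoc's exported JSON), detecting removals and renames comes down to a set difference over the exported item paths:

```python
def breaking_changes(old_items, new_items):
    """Public items present in the old API but missing from the new one
    indicate a detectable breaking change; pure additions are fine."""
    removed = set(old_items) - set(new_items)
    return sorted(removed)

# Hypothetical exported item paths for two releases of a 'json' crate.
old_api = ["json::parse", "json::Value", "json::to_string"]
new_api = ["json::parse", "json::Value", "json::stringify"]  # renamed!

print(breaking_changes(old_api, new_api))  # ['json::to_string']
```

A rename shows up as a removal plus an addition, which is exactly the conservative behavior you'd want from an automated gatekeeper.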
How hard would it be to get usable type signatures out of the compiler as well? It seems like diffing two sets of type signatures would capture everything but purely semantic api incompatibility. Also seems like "here's a machine-readable version of all the types in this package" is a useful thing for a compiler toolchain to produce.
Because, of course, Mozilla is paying you to sit around on your hands.
Requiring a version number to be within the semver format is one thing; but abiding by the rules of semver is another thing entirely. You'll see plenty of Node packages break compatibility on minor or patch updates. It's not something I see as extremely important in that community.
Will this play nicely with the package management tools OSes already have, or is this going to end up being yet another source for files/packages to accumulate that are outside the view of well-documented and designed administrative tools?
Which is why we have testing and staging environments to make these changes in first. Conversely, when everything pulls in its own version of a library, you run into situations like the zlib buffer overflow, where huge numbers of programs needed to be rebuilt because no one used system-packaged libraries. Obsolete and vulnerable software is a liability, and not having tools which can readily tell you what software needs to be updated makes quite a few people's jobs quite a bit harder.
Either you rely on system libraries, OR every binary/library pulls in its own copy of its dependencies (the npm model).
The latter lets you have multiple versions of the same library for different dependencies, without confusing your system package manager.
The former might mean less build time, but it's pretty much what's wrong with the C/C++ ecosystem: you can't track the system libraries from the package tool, and you get out of sync.
Pillow and freetype specifically come to mind as an irritatingly broken recent example: when freetype upgrades, perfectly working Python projects suddenly stop working, because the pinned Python libraries that depended on the previous freetype version rely on the system package manager not doing stupid things.
It would be nice if you could point cargo at a 'local binary cache' to speed things up, and make builds work even if the central repository goes down; that could be package-manager friendly, I imagine.
I'd say look at the BSD ports system for an example of a system done well, one that is extensible to such issues. If your app depends on projects with irresponsible developers that make unannounced API/ABI changes and the like, the general format of the ports system, and tools for maintaining private repositories like portshaker and poudriere, make it easy to create ports that give you the best of both worlds.

To add to this, one can use metapackages and the like, which don't contain any files in and of themselves but point to the latest version. So, for example, you create a port named libbar, which will always get you the latest and greatest, but you also have libbar-1, libbar-12, etc., that you can use when you need an explicit dependency. Additionally, there are ports of the system libraries -- heimdal, openssl, etc. -- so you can have a stable base system, but still have more recent versions around for your application.

Most of the issues people have with packaging these projects seem to be based much more on inexperience with good packaging tools and practices than on the idea of system packaging in and of itself.
I think it would be just as easy to package rust programs using OS native package management as anything else, and I'm sure packages will be made for popular things for common package managers. But OS native package managers are a royal pain to use when actively developing on a project that has some dependencies, and bespoke Makefiles are a very imperfect solution.
Flipping your question around a bit: will package manager creators and maintainers ever develop better solutions to the use case of development rather than system administration so that we don't have to keep creating these language-specific tools?
They already have automated Haskell packages from cabal; it shouldn't be hard to integrate cargo with nix once it's more stable.
You wouldn't even require upstream NixOS packages: just place built cargo packages in the nix store, using it like a cache.
Then upstream NixOS channels could start accumulating cargo packages, making cargo dependency "builds" faster.
Yeah, nix does look pretty awesome and aware of (even actively designed for) both the system administration and development use cases. If it becomes the native package manager for some popular operating systems, I may have to eat my words about language-specific solutions.
Have you taken a look at the FreeBSD ports system? It uses makefiles to manage pretty much any software install you can think of. Additionally, there's a pretty good infrastructure there for managing custom build trees, etc. They also have things like BSDpan that let you use arbitrary Perl modules with the package management system.
I've used ports, but I've never developed a project using it, so maybe I should try that out. But "uses makefiles" does not make me particularly optimistic for its pleasantness. I really prefer (and think it's been proven possible to build) declarative, rather than imperative, systems for managing dependencies.
Until the various OS package managers stop pretending they are a universe unto themselves, this problem will continue. Languages have to distribute libraries across all of their host systems, so focusing on support for any particular package manager (or small subset thereof) doesn't buy much.
Hopefully the cargo team can come up with a solution that works a little better here, but I wouldn't hold my breath.
Each language having its own tools is fine and I doubt OSes are against that.
Those tools just need to support a very specific set of features or scenarios to make packaging easy:
Understand that enterprise OSes are not built on developer MacBooks. Enterprise distros have reproducible builds on automated systems, with chroots that contain only what the package needs, no network access, and sometimes the build happens inside a virtual machine. It is almost ridiculous how, after Maven forgot about the "offline" use case, almost every tool released afterward has made the same mistake.
Understand that Linux distributions sell support, and that means being able to rebuild a patched version of a failed package. So whatever dependencies are pinned in Cargo.toml or a Gemfile are irrelevant: the OS maker will override them as a result of testing, patching, or simply to share one dependency across multiple packages. Distros can't afford to ship one package per git revision in use on a popular distro, and then fix the same security issue in every one of them.
So having "cargo build" be able to override dependencies and look at the installed libraries instead, via a command-line switch or environment variable, saves the packager from having to rewrite the Cargo.toml in the package recipe.
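As a sketch of what such an override switch could look like (the environment variable name, return values, and function here are entirely hypothetical, not an actual Cargo interface):

```python
import os

def resolve_dependency(name, pinned_version):
    """Prefer the distro-installed library when the packager asks for it;
    otherwise honor the version pinned in the manifest."""
    if os.environ.get("CARGO_USE_SYSTEM_DEPS") == "1":
        return ("system", name)  # hand off to the installed system library
    return ("pinned", name, pinned_version)

# Normal developer build: the manifest's pin wins.
print(resolve_dependency("json", "1.4.12"))  # ('pinned', 'json', '1.4.12')

# Distro package build: the packager flips one switch, no Cargo.toml rewrite.
os.environ["CARGO_USE_SYSTEM_DEPS"] = "1"
print(resolve_dependency("json", "1.4.12"))  # ('system', 'json')
```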
Maven was probably the first tool that made packaging almost impossible, completely ignoring the use case of patching one component of the chain and being able to rebuild everything.
Semantic versioning is great news, because it allows you to replace a dependency with any semantically compatible version (not just the exact pinned one).
For integrating with C libraries, not much needs to be done. If you support pkg-config, then you cover 85% of the cases.
I had the impression this fills the same role as bower, npm (non-global) dependencies, or plain custom vendor dirs handled with your favourite git module paraphernalia: your build is the place your dependencies live (plus possibly a cache directory somewhere else).
And you use this packaging system to manage your build, not the deployment of your (IIRC statically linked) build artifacts.
While technically the cache directory is a place where files can accumulate outside the view of well-documented and designed administrative tools, this is a common problem shared by many tools, including your favourite browser.
I haven't been following the development of this package manager, but previous attempts at making a package manager for Rust have failed. Is this package manager supported officially now? I really hope it will stick around.
Yes, the developers are actually domain experts being paid by Mozilla. The release of this website also coincided with the move of the source repository into the rust-lang organisation: https://github.com/rust-lang/cargo .
Yes, it will be supported. The Bundler guys were specifically contracted by Mozilla to sort out the mess. They are actually using Rust in production at the moment, so they have a big stake in its future.
The quiet nature of Cargo's development process was actually a response to the previous package-management failures. The idea was not to publicise heavily before it was ready for dogfooding. This seems to have paid off.
While I support semantic versioning, people need to be aware that it's only as good as the package maintainer. I have used packages that have (unintentionally) broken SemVer conformity. Nothing really stops an author from releasing breaking changes when going from "1.2.1" to "1.2.2".
This is very welcome news, indeed. I will have to give it a try as soon as I can make the time.
I hope it will be more stable and work better than the Haskell package manager, Cabal. I literally never got that to work on any machine. It would typically destroy itself while attempting to update itself...
I 100% agree with you. It took me like an hour to figure out where I thought it made sense to put rust-nightly, and I'm still not really sure I did it right. But it works and is way better than the morning compilation cronjob I used to use.
I agree, it works, but we need to keep pressing those guys... or contribute. Cask still doesn't have reinstall/upgrade. As far as I know, Homebrew can't upgrade packages with head versions, and, worst pain of all - Homebrew doesn't support Yosemite or any unreleased OS X version. Being a tool for developers primarily, all the above are must-haves!
It'd be really nice if it didn't impose a hard choice of json-ish to use for the manifest, even though that's a rather small thing. Just don't want to be stuck with toml if it doesn't gain wider acceptance than it has so far.
I think mixed is also very important. For example, sometimes you really do only have a binary dependency (e.g., .NET assemblies), while other times, you may have a binary + source dependency (e.g., library + header files).
I don't know about the security vulnerabilities, but it works fine as a config file format (we use it at my company for a lot of in-house stuff). I had a similar reaction to the language. Even if not YAML, why not just use JSON? It's universal, dead simple to use and understand, has extensive libraries in just about any language, etc...
That said, it's not that big of a deal. At least it's not an in-house markup like Haskell's cabal...
JSON is not really human-editable. Those quotes and commas, ugh! Also, JSON lacks comments.
The vulnerabilities in YAML (which is a superset of JSON, by the way) point at why YAML and JSON both aren't appropriate for configuration: they are _serialization_ formats. Configuration isn't what they're built for.
And you're right, it's really just not a huge deal in any way. Especially once we have `cargo project` to autogenerate the basics.
Live by eval, die by eval. But more seriously, nobody is forcing a Rust YAML library to support arbitrary structure deserialization (or maybe as an optional switch). I don't think you'd want such a switch on in a build system configuration file.
That's one way to look at it. On the other hand, when a format presents useful but potentially dangerous characteristics (e.g., XML entity expansion), it is entirely sensible to offer a way to not take them into account.
I don't really care what format the cargo files use (that's just an inconsequential bikeshed as far as I'm concerned) but in your example "comment" is really not a comment. I would expect to be able to put free-form comments anywhere I want and have them thrown away by the parser.
Your solution works fine for docstrings, but comments and docstrings are not the same thing (although many languages that don't support docstrings in the syntax hack them together using comments, admittedly).
But beyond that, what's the argument for switching to json? Is there some kind of intercompatibility with npm/Node.js to be gained?
That's not really carte blanche to build everything on unstable technology. The point is things are supposed to be converging on stabilization. And a lot of what's unstable now is under the direct control of Rust. What TOML does or doesn't do is now a matter that needs to be worked out with Tom. It's not confidence-inspiring in the least.
If the plan is to jettison TOML, then it's simply just an odd choice to use for a first cut. And from a pure perception standpoint, it seemingly reaffirms concerns some have had about the Bundler team building Cargo (right or wrong).
The plan isn't to jettison TOML, but even though it's 'unstable,' it hasn't changed in a very long time. And, given, uh, Tom, I doubt it will very much. He has better things to do these days.
I can see an argument for INI files, but TOML is basically INI with some extensions. And YAML (which has no Rust parser, and nobody who wants to write one) and JSON (which does have one in the standard library) are very poor configuration formats.
Which one would you have preferred? It seems like a reasonable choice to me.
> And from a pure perception standpoint, it seemingly reaffirms concerns some have had about the Bundler team building Cargo (right or wrong).
You could just spell out the personnel issues rather than being all FUDdy about it. (I'm friends with Yehuda, but also was room-mates with the current maintainer of Bundler (until very recently). Bundler isn't perfect, but lots of that had to do with people assuming a pre-1.0 project is stable, and upstream bullshit with RubyGems.)
That's a fairly weak argument. If it's not going to change, then maybe some outreach should be done to get him to tag it 1.0. Otherwise you're opening yourself up to the exact same issue you outline as a problem for Bundler -- people assuming a pre-1.0 project is stable. As long as there's the option for someone to say "this is pre-1.0, so I can change it whenever," it's going to cause concern, because most of us have been burned by that several times over. A de facto 1.0 release vs. an actual 1.0 release.
As for the FUDdiness, there was another extensive comment thread here on HN when Cargo was announced. I didn't feel it necessary to re-enumerate all those concerns. And to the best of my recollection, the problems listed were all technical in nature, not personal. For my part, my list was:
But, I'm hardly alone in being concerned. This is something that will drastically influence the ecosystem. So, it should be met with some level of scrutiny. If it comes out passing muster, great. But that's not going to happen by being glib about things.
Thanks! Since the points are enumerated, I can refute them:
> * You can't override a dependency:
You can, in modern Bundlers. But the transitive dependency issue is inherent to Ruby, not Bundler. Cargo will work like npm in this regard, not like bundler.
> Dependencies can disappear on you:
The central repository isn't built yet, but this is very valid. The reason that `gem yank` is still available to everyone is that you'd be surprised how often people push proprietary stuff up to RubyGems. It was removed for a while and caused a significant burden on the RubyGems team.
Regardless, yes, this is a serious consideration.
> It has weird rules:
Totally fair. Let's learn from that and make Cargo not have weird rules. :)
> * It promotes bad practices:
Yup, that's a Rails thing. Rust should be way better, as it doesn't have this kind of issue.
> * It's slow:
This actually has just as much to do with RubyGems as it does Bundler. See all of the presentations Andre has been doing recently, and the bundler-api project.
> * It was designed for MRI:
Quite fair. We only have one Rust implementation so far, but `cargo exec` won't be necessary.
> So, it should be met with some level of scrutiny.
Absolutely! I don't mean to say "just take this without complaints." But without specific, actionable complaints, it won't get fixed. Please make as many specific, actionable complaints as possible, especially now, pre 1.0.
I think we wound up on the same page, even if by a circuitous route. So I don't want to continue beating up on Bundler. But I guess I'm going to take one more parting shot and mention that a lot of its execution performance is due to Thor as well (at least when I've profiled it). Using binstubs rather than "bundle exec" can be a good deal faster.
I feel compelled to mention it because this seems to be a clear case where an aesthetic DSL was chosen over performance and it can't really be fixed without a backwards-incompatible change. Rust and Ruby are two different beasts and I get that. I just hope that performance is a core design consideration for Cargo (I have no idea if it is or if it's just a nice-to-have).
Airing dirty laundry is hardly ever pleasant. I'm certainly not immune to my own set of WTFs. I legitimately just want to make sure Cargo comes out awesome. And I much appreciate Mozilla's commitment to having a standard dependency resolution tool when Rust ships.
Sounds good. I'll make one tiny parting shot to your parting shot, and we'll be done. :)
> Using binstubs rather than "bundle exec" can be a good deal faster.
Absolutely, which is why we switched to them with Rails. It is a tough problem, though, given Ruby's constraints. In my testing, it's the startup time that's the issue, not Thor, because `bundle exec` has to re-start the interpreter multiple times, and binstubs don't. Anyway.
> I just hope that performance is a core design consideration for Cargo
Rust people already feel the pain of very long rustc compiles, so while I'm not sure that it's an overriding concern, given the Rust world's concern with performance in general, I expect it to be way better. Ruby has always kinda thrown performance to the wolves.
> I legitimately just want to make sure Cargo comes out awesome.
We all do. And I'll admit to being a bit sensitive to 'lol bundler,' which I feel is often said without fully understanding the problem space, which is admittedly large. Not that you are doing that, but I have seen similar comments elsewhere. Once you explain the details, it's pretty clear why Bundler does what it does.
Anyway, yes: let's make Cargo 1.0 and Rust 1.0 super awesome! I'm really excited that we're taking this step forward. It's a huge day for Rust.
> What TOML does or doesn't do is now a matter that needs to be worked out with Tom.
For what it's worth, I've just been brought on as a maintainer for TOML proper. Tom is still the guy though. Putting aside my involvement with Rust (<3), I also maintain the TOML library for Go, and I want to see an expedient path to 1.0 with minimal breakage. (There are lots of folks in the Go community already using TOML for things, including Mozilla's Heka project and CoreOS's `etcd` project.)
I saw that, based upon Steve's linked issue elsewhere in this thread. I think that's great and would go a long way in addressing some concerns here. Moving it to an organization is reassuring as well. Thanks for the update!
The YAML spec is also super complicated, and writing a parser for it is non-trivial (for comparison, YAML has ~150 EBNF rules while XML with DTD has about ~80)! I'm not comparing validation rules, but they are probably about the same.
This complexity means there is already a parser for TOML, and not one for YAML. That's, IMO, the main reason they went with TOML.