Cargo, Rust's Package Manager (crates.io)
229 points by wunki on June 24, 2014 | 120 comments



This is sweet:

For example, if I have three packages:

   - uno depends on json 1.3.6
   - dos depends on json 1.4.12
   - tres depends on json 2.1.0
Cargo will use json 1.4.12 for uno and dos, and json 2.1.0 for tres.

Hopefully Rust builds a culture that respects semantic versioning better than the Ruby & Node cultures do. That has to start at the top. There were several Rails 2.3.X releases with minor ABI incompatibilities. Truly respecting semver would have required these patch-level updates to get a new major number.


Note that this is a sliiiightly modified SemVer: in SemVer, these three things would conflict. It would be strict SemVer if uno depended on ~>1.3, and dos on ~>1.4.

We hypothesize that this difference works better for AOT-compiled languages. And since it's pre-1.0, it's all good. This is the 'we'll watch this closely and adjust' from the email: it might not be okay.


Why would uno not work on 1.4?


When you declare an x.y.z dependency, tools that use SemVer (like Bundler, which is the biggest influence on Cargo) assume x.y.(>=z). So, a 1.4.12 dependency says "anything greater than or equal to 1.4.12, but less than 1.5.0."

When you declare an x.y dependency, tools that use SemVer assume x.(>=y). So, a 1.4 dependency says "anything greater than or equal to 1.4.0, but less than 2.0.0."

This is assuming the ~> operator; if it were =, it would ONLY be 1.4.12, which is even more restrictive.

The project _should_ work, which is why Cargo is using this modified version of ~>.
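
In manifest terms, the uno/dos/tres example might look something like this (a sketch based on the manifest examples on the site; treat the exact syntax as an assumption):

    # uno's Cargo.toml -- resolves to json 1.4.12 under the relaxed rule
    [dependencies.json]
    version = "1.3.6"

    # dos's Cargo.toml -- also resolves to json 1.4.12
    [dependencies.json]
    version = "1.4.12"

    # tres's Cargo.toml -- resolves to json 2.1.0
    [dependencies.json]
    version = "2.1.0"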


Don't equate Rails with Ruby (some projects follow semver very closely).

Rails follows a shifted semver scheme, as documented here: https://github.com/rails/rails/blob/master/guides/source/mai...

Bumps from 4.x to 4.y might contain smaller breaking changes.

For a project as large as Rails, I find that reasonable; otherwise we'd be at Rails 20 by now, which also wouldn't give a good sense of how the project has evolved.


Rails, for better or worse, is the flagship project of Ruby, and shapes the Ruby culture quite strongly.

I would much prefer Rails 20 to the current situation. If you want to make a major marketing-level move, introduce a codename or something. Separate marketing from semantics.


MRI didn't follow semver until recently, and even now it comes with some caveats.

Ruby as an ecosystem doesn't really care for semver. Some projects follow it anyways, which I can respect, but they aren't the norm.


MRI picked its version scheme in the 90s, long before "semver" was a thing. That kind of version scheme wasn't unusual at the time. Problems due to legacy don't make a good argument.

The Ruby ecosystem cares about semver in general and it is promoted there a lot, but I agree: it certainly isn't uniform.


MRI still doesn't follow semver but core doesn't really give a crap.


They claim "Semantic Versioning type" versioning, which is, of course, not SemVer, but I think what your parent was referring to.

https://www.ruby-lang.org/en/news/2013/12/21/ruby-version-po...


What would be fantastic is if a Cargo repo could refuse a package which doesn't follow this policy (try to upload a new minor version with an incompatible ABI, get 400 Bad Request as a response). Of course, it won't save you from a change in semantics, but it would already be a good step forward.


I'd be very curious to know if it's possible for an automated tool to perfectly detect ABI breakage.


It would be impossible to do so perfectly for a Turing-complete language. You could get close by running the previous minor version's test suite against the new update, but that would essentially turn the package repository into a continuous integration server, which is expensive to maintain.

There's another, less perfect way of detecting API breakage, which is to use the rustdoc tool to export the project's documentation as JSON. This would let Cargo detect whether items had been added, removed or renamed, which covers a large number of cases in which API compatibility is broken.

If it encourages developers to bump the version number rather than fix the flagged incompatibilities, then it will also drastically reduce the number of incompatibilities in the package repository that can't be detected by rustdoc. While it's impossible to completely eliminate the human element in upholding the version number contract, software can limit the number of errors in the system.
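
As a rough illustration, the core of the rustdoc-based check could boil down to a set difference over exported item paths pulled from the two JSON dumps (a minimal sketch; the item names are invented, and real rustdoc output would need parsing first):

    use std::collections::HashSet;

    fn main() {
        let old: HashSet<&str> =
            ["json::encode", "json::decode"].iter().cloned().collect();
        let new: HashSet<&str> =
            ["json::encode"].iter().cloned().collect();
        // Anything exported by the old version but missing from the new one
        // is a candidate API break that should force a major version bump.
        for item in old.difference(&new) {
            println!("possible breaking change: {} was removed", item);
        }
    }

Additions, by contrast, are fine under semver and would only warrant a minor bump.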


> which is to use the rustdoc tool to export the projects documentation as json.

Side note: it already does this by default: http://doc.rust-lang.org/search-index.js

I'm not sure if that's enough information to do this analysis, but the basics are there! I've opened an issue: https://github.com/rust-lang/cargo/issues/47


How hard would it be to get usable type signatures out of the compiler as well? It seems like diffing two sets of type signatures would capture everything but purely semantic API incompatibility. It also seems like "here's a machine-readable version of all the types in this package" is a useful thing for a compiler toolchain to produce.

Because, of course, Mozilla is paying you to sit around on your hands.


It shouldn't be that bad, given the need for the compiler to export an AST.


Npm requires modules to follow semver. https://www.npmjs.org/doc/package.json.html#version

Not sure why you think the culture there doesn't respect semver.


Requiring a version number to be in the semver format is one thing, but abiding by the rules of semver is another thing entirely. You'll see plenty of Node packages break compatibility on minor or patch updates. It's not something I see treated as extremely important in that community.


I don't expect that since the version field is optional.


I'd love it if

$ cargo cult

would build a new project/module from a template.


The plan is for that to be `cargo project`, but that might be a fun alias. :)


Yes, absolutely. Cargo cult doesn't have the best reputation, but this would be an opportunity to cherish it :-)


And here is the announcement from Yehuda Katz:

https://mail.mozilla.org/pipermail/rust-dev/2014-June/010569...


Will this play nicely with the package management tools OSes already have, or is this going to end up being yet another source for files/packages to accumulate that are outside the view of well-documented and designed administrative tools?


Ha! There is nothing well-designed about the mutually incompatible, political hell that is OS package managers.

The best solution (as taken by npm, virtualenv and others) is to install libraries locally to the project that is building them.

That way, package management becomes the sole concern of the build system.

"Accumulation" is a good thing, it means each project has the exact version of a package that it was tested with, not some later version that a sysadmin decides it "probably should work with".


OS package managers make two assumptions:

- that packages follow semver

- that the OS packagers are in a better position to test package combinations.

If the author releases a new version of libfoo, and A, B and C in an OS repo depend on libfoo, then the OS packagers do not release a new version of libfoo until the tests for A, B & C pass.

These are two good assumptions, and the language-package world would be in much better shape if it adopted them too.


Which OSes do these "OS package managers" belong to? There is a world out there bigger than GNU/Linux.


Cargo assumes the former.


Which is why we have testing and staging environments to try these changes in first. Conversely, when everything pulls in its own version of a library, you run into situations like the zlib buffer overflow, where huge numbers of programs needed to be rebuilt because no one used system-packaged libraries. Obsolete and vulnerable software is a liability, and not having tools which can readily tell what software needs to be updated makes quite a few people's jobs quite a bit harder.


You can't have it all.

Either you rely on system libraries, OR every binary/library pulls in its own copy of its dependencies (the npm model).

The latter lets you have multiple versions of the same library for different dependencies, without confusing your system package manager.

The former might mean less build time, but it's pretty much what's wrong with the C/C++ ecosystem: you can't track the system libraries from the package tool, and you get out of sync.

Pillow and freetype specifically come to mind as an irritatingly broken recent example: when the base system freetype library is upgraded, perfectly working Python projects suddenly stop working, because the pinned Python libraries that depended on the previous freetype version relied on the system package manager not to do stupid things.

It would be nice if you could point cargo at a 'local binary cache' to speed things up and make builds work even if the central repository goes down; that could be package-manager friendly, I imagine.


I'd say look at the BSD ports system for an example of a system done well that is extensible to such issues. If your app depends on projects with irresponsible developers who make unannounced API/ABI changes and the like, the general format of the ports system, and tools for maintaining private repositories like portshaker and poudriere, make it easy to create ports that give you the best of both worlds.

To add to this, one can use metapackages and the like, which don't contain any files themselves but point to the latest version. So, for example, you create a port named libbar, which always gets you the latest and greatest, but you also have libbar-1, libbar-12, etc., that you can use when you need an explicit dependency. Additionally, there are ports of the system libraries -- heimdal, openssl, etc. -- so you can have a stable base system but still have more recent versions around for your application.

Most of the issues people have with packaging these projects seem to be based much more on inexperience with good packaging tools and practices than on the idea of system packaging in and of itself.


Welcome to emerge @preserved-rebuild and revdep-rebuild. Still no better way has been found to date :/.


I think it would be just as easy to package Rust programs using OS-native package management as anything else, and I'm sure packages will be made for popular things on common package managers. But OS-native package managers are a royal pain to use when actively developing a project that has some dependencies, and bespoke Makefiles are a very imperfect solution.

Flipping your question around a bit: will package manager creators and maintainers ever develop better solutions to the use case of development rather than system administration so that we don't have to keep creating these language-specific tools?


This is an insightful point. Nix is the only package manager I'm aware of that seems like it could fit the bill: https://nixos.org/nix/


They already have automated Haskell packages from cabal; it shouldn't be hard to integrate cargo with nix once it's more stable.

You wouldn't even require upstream NixOS packages, just place built cargo packages in the nix store, using it like a cache. Then upstream NixOS channels could start accumulating cargo packages, making cargo dependency "builds" faster.


Yeah, nix does look pretty awesome and aware of (even actively designed for) both the system administration and development use cases. If it becomes the native package manager for some popular operating systems, I may have to eat my words about language-specific solutions.


Have you taken a look at the FreeBSD ports system? It uses makefiles to manage pretty much any software install you can think of, and there's a pretty good infrastructure for managing custom build trees, etc. They also have things like BSDpan that let you use arbitrary Perl modules with the package management system.


I've used ports, but I've never developed a project using it, so maybe I should try that out. But "uses makefiles" does not make me particularly optimistic about its pleasantness. I really prefer (and think it's been proven possible to build) declarative, rather than imperative, systems for managing dependencies.


Until the various OS package managers stop pretending they are a universe unto themselves, this problem will continue. Languages have to distribute libraries across all of their host systems, so focusing on support for any particular package manager (or small subset thereof) doesn't buy much.

Hopefully the cargo team can come up with a solution that works a little better here, but I wouldn't hold my breath.


Each language having its own tools is fine, and I doubt OSes are against that. Those tools just need to support a very specific set of features and scenarios to make packaging easy:

Understand that enterprise OSes are not built on developer MacBooks. Enterprise distros have reproducible builds on automated systems, with chroots that contain only what the package needs and no network access, and sometimes the build happens inside a virtual machine. It is almost ridiculous how, after Maven ignored the need to build offline, almost every tool released since has made the same mistake.

Understand that Linux distributions sell support, and that means being able to rebuild a patched version of a failed package. So whatever dependencies are pinned in Cargo.toml or a Gemfile is irrelevant: the OS maker will override them as a result of testing, patching, or simply to share one dependency across multiple packages. Distros can't afford one package per git revision used in a popular distro, and then having to fix the same security issue in all of them.

So having "cargo build" be able to override dependencies and instead look at the installed libraries, via a command-line switch or environment variable, would save the packager from having to rewrite the Cargo.toml in the package recipe.
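
Something like this, say (both the variable and the flag are hypothetical; nothing of the sort exists in Cargo today, this just illustrates the packager use case):

    # Hypothetical: resolve against system-installed libraries instead of
    # the versions pinned in Cargo.toml.
    CARGO_SYSTEM_DEPS=1 cargo build
    cargo build --use-system-deps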

Maven was probably the first tool that made packaging almost impossible and completely ignored the use case of patching one component of the chain and being able to rebuild everything completely.

Semantic versioning is great news, because it allows you to substitute dependencies that share a compatible semantic version (not exactly the same one).

For integrating with C libraries, not much needs to be done. If you support pkg-config, you cover 85% of the cases.


I had the impression this fills the same role as bower, npm (non-global) dependencies, or plain custom vendor dirs handled with your favourite git submodule paraphernalia: your build is the place your dependencies live (plus possibly a cache directory somewhere else). And you use this packaging system to manage your build, not the deployment of your (IIRC statically linked) build artifacts.

While technically the cache directory is a place where files can accumulate outside the view of well-documented and designed administrative tools, this is a common problem shared with many tools, including your favourite browser.


I haven't been following the development of this package manager, but previous attempts at making a package manager for Rust have failed. Is this package manager supported officially now? I really hope it will stick around.


Yes, the developers are actually domain experts being paid by Mozilla. The release of this website also coincided with the move of the source repository into the rust-lang organisation: https://github.com/rust-lang/cargo .


Yes, it will be supported. The Bundler guys were specifically contracted by Mozilla to sort out the mess. They are actually using Rust in production at the moment, so they have a big stake in its future.

The quiet nature of Cargo's development process was actually a response to the previous package-management failures. The idea was not to publicise heavily before it was ready for dogfooding. This seems to have paid off.


How do you pin a dependency to a particular version or git sha? I can't find anything in the FAQ or docs that implies that it's possible.


From the announcement email:

> The next features we're planning on working on are:
> - Supporting refs other than `master` from git packages


As a workaround, I guess you could fork a specific version on your own github account, and use that as the dependency?


You don't even need to fork it on GitHub; Cargo just passes the URI straight to the git tooling.

You can keep it on your local filesystem and reference it w/ `file:///path/to/repo.git` -- you can also use SSH and HTTPS URIs from any other repository host, not just github!

So you could just clone the repo to some `vendor` directory and move `master` to whatever version you want!
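
In manifest form, that might look something like this (a sketch; the exact keys are an assumption based on Cargo's announced git-dependency support, and the path and package name are made up):

    # Point Cargo at a local clone whose master branch has been reset to
    # the exact commit you want.
    [dependencies.hammer]
    git = "file:///home/me/vendor/hammer.rs"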


The Rust folk are adamant about supporting versioned libraries out of the box, so support for this will definitely land before 1.0.


This looks like yet more awesome stuff coming out of the Rust camp.

I'm pretty excited to see Teepee and Rust come together so I can really give it a spin doing what I'm currently doing daily for a job.


While I support semantic versioning, people need to be aware that it's only as good as the package maintainer. I have used packages that have (unintentionally) broken semver conformance. Nothing really stops an author from releasing breaking changes when going from "1.2.1" to "1.2.2".


The goal is to use package manager defaults and community pressure to keep things on the straight and narrow. We'll see how it goes!


This is very welcome news, indeed. I will have to give it a try as soon as I can make the time.

I hope it will be more stable and work better than the Haskell package manager, Cabal. I literally never got that to work on any machine. It would typically destroy itself while attempting to update itself...


I would really love to see some docs on how to actually install and get started with Cargo.

It doesn't ship with Rust and the docs on GitHub and crates.io are not very enlightening.


Yesterday, my contract with Mozilla to write documentation started. Today, I'm starting on rewriting the tutorial: https://github.com/rust-lang/rust/pull/15131 (apparently, bors is a bit backed up)

The new tutorial will be based around 'real' Rust development, and so will assume Cargo.

That said, http://crates.io/ should have install instructions on the site. I'll open a ticket and get on that.


This was how I installed it (Mac):

  1) Install latest version of Rust found here: http://www.rust-lang.org
  2) git clone --recursive git@github.com:rust-lang/cargo.git
  3) make
  4) make install (you may need sudo)


I'm putting rust (and other self-compiled software) in $HOME/bin. You can change the default installation location from /usr/local by running

  DESTDIR=$HOME make install


There is no brew formula?


For rust? There is... cargo, not so much:

    aroch:~/staging/|⇒ brew info rust
    rust: stable 0.10 (bottled), HEAD
    http://www.rust-lang.org/
    /usr/local/Cellar/rust/0.10 (74 files, 174M) *
      Poured from bottle
    From: https://github.com/Homebrew/homebrew/commits/master/Library/...

    aroch:~/staging/|⇒ brew info cargo
    Error: No available formula for cargo


Rust has a brew package. Cargo is new so it may be a day or three before it gets out there too.


Hmm, well, rust's brew formula seems a little outdated (0.10), so if you are intent on using Cargo soon, I would recommend building rust yourself or grabbing a new binary from their website.


There is a homebrew-cask of Rust's nightly binary. I think this should work:

  brew tap caskroom/cask
  brew install brew-cask
  brew tap caskroom/versions
  brew cask install rust-nightly


Stuff being split between Cask and Homebrew is just terrible. Homebrew also has a versions tap. Those two projects should combine efforts and remove the ambiguity.


I 100% agree with you. It took me like an hour to figure out where I thought it made sense to put rust-nightly, and I'm still not really sure I did it right. But it works and is way better than the morning compilation cronjob I used to use.


I agree, it works, but we need to keep pressing those guys... or contribute. Cask still doesn't have reinstall/upgrade. As far as I know, Homebrew can't upgrade packages with HEAD versions, and, worst pain of all, Homebrew doesn't support Yosemite or any unreleased OS X version. For a tool aimed primarily at developers, all of the above are must-haves!


Learned two things:

1. Wycats (Yehuda Katz) is on Rust apparently :)

2. `.toml` -- some crossbreed YAML/INI file format that I like
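
For a taste, here's roughly what the hello-world manifest from the site looks like in it (a sketch; the `[[bin]]` section syntax is my assumption):

    # INI-style sections with typed values and comments -- hence the
    # YAML/INI crossbreed feel.
    [package]
    name = "hello-world"
    version = "0.1.0"
    authors = ["wycats@example.com"]

    [[bin]]
    name = "hello-world"   # the name of the executable to generate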


It'd be really nice if it didn't impose a hard choice of which JSON-ish format to use for the manifest, even though that's a rather small thing. I just don't want to be stuck with TOML if it doesn't gain wider acceptance than it has so far.


What about binary only dependencies?


I think mixed is also very important. For example, sometimes you really do only have a binary dependency (e.g., .NET assemblies), while other times, you may have a binary + source dependency (e.g., library + header files).


This should absolutely be handled in some good way. I think right now you'd just set the script attribute to something empty, but that's obviously a hack.


This is good news: a better alternative to conventional techniques.


Someone should log into GitHub and file a ticket to say no to TOML. YAML would be perfect for this; it's mature and people already know it.


I don't have strong feelings about toml, but the YAML spec is incredibly complicated, and has way too many features for a config file format. And security vulnerabilities O_O


I don't know about the security vulnerabilities, but it works fine as a config file format (we use it at my company for a lot of in-house stuff). I had a similar reaction to the choice. Even if not YAML, why not just use JSON? It's universal, dead simple to use and understand, has extensive libraries in just about any language, etc...

That said, it's not that big of a deal. At least it's not an in-house markup language like Haskell's cabal...


> I don't know about the security vulnerabilities,

About 14 months ago, it caused some of the most serious vulnerabilities in the Ruby on Rails world ever: http://tenderlovemaking.com/2013/02/06/yaml-f7u12.html

> why not just use JSON?

JSON is not really human-editable. Those quotes and commas, ugh! Also, JSON lacks comments.

The vulnerabilities in YAML (which is a superset of JSON, by the way) point at why YAML and JSON both aren't appropriate for configuration: they are _serialization_ formats. Configuration isn't what they're built for.

And you're right, it's really just not a huge deal in any way. Especially once we have `cargo project` to autogenerate the basics.


> About 14 months ago, it caused some of the most serious vulnerabilities in the Ruby on Rails world ever: http://tenderlovemaking.com/2013/02/06/yaml-f7u12.html

Live by eval, die by eval. But more seriously, nobody is forcing a Rust YAML library to support arbitrary structure deserialization (or maybe only as an optional switch). I don't think you'd want such a switch turned on in a build-system configuration file.


Then you're not supporting YAML, you're supporting your own subset of YAML.


That's one way to look at it. On the other hand, when a format presents useful but potentially dangerous characteristics (e.g., XML entity expansion), it is entirely sensible to offer a way not to take them into account.


Quite fair. Depends on what kind of tradeoff you're looking for: this personally makes me search for a new format. It's reasonable to make a different choice.


I don't even think that's fair. What happens when everyone implements their own pet subset of a standard?


It's not a hypothetical future; you can often disable dangerous features in XML parsers. It seems sensible to me to do the same with YAML parsers.


As a data exchange format, JSON (understandably) has no comments in its grammar. This is a big problem for config files. TOML is actually designed for this kind of thing.


Why not just use JSON?

In fact, why not just use npm's package.json?


I may not be a majority here, but I see JSON as a data transportation format, while I see TOML or YAML as configuration formats.

You cannot write comments in JSON, for instance.


You're not alone. YAML or similar formats are the right choice for these things, not JSON: less clutter, comments separated from the data, and easier for humans to read.


Example from the website in JSON:

    {
      "package": {
        "name": "hello-world",
        "version": "0.1.0",
        "authors": ["wycats@example.com"]
      },
      "bin": {
        "name": "hello-world",
        "comment": "the name of the executable to generate"
      }
    }

So where is the problem?

And you have the advantage that other tools can use the complete file, including the comment. In TOML you need an extra parser to grab the comment.


I don't really care what format the cargo files use (that's just an inconsequential bikeshed as far as I'm concerned), but in your example "comment" is really not a comment. I would expect to be able to put free-form comments anywhere I want and have them thrown away by the parser.

Your solution works fine for docstrings, but comments and docstrings are not the same thing (although many languages that don't support docstrings in the syntax hack them together using comments, admittedly).

But beyond that, what's the argument for switching to json? Is there some kind of intercompatibility with npm/Node.js to be gained?


Comments aren't just used to put notes in the file. They're also used to comment out code.

Also, having to quote everything makes writing JSON by hand a pain. Why would you want to use it over a nicer format?


It's sad to see that the guidelines about downvoting are clearly not respected.


YAML is a serialization format, not a configuration format.


It's really a configuration format that mistook itself for a serialization format.

What I wouldn't do for a cut-down YAML standard with most of the serialization crap cut out.


There isn't a lot of attachment to the exact manifest format used, but using json is clearly a step backwards!


JSON is annoying with the commas: one forgotten comma, or one comma too many, and the whole file doesn't parse.


Because comments.


json is only marginally easier to edit by hand than xml. also, comments.


Yeah, why use some format that no one knows and that's harder for other tools to parse?


Not only that, but TOML is still considered unstable by its author. Granted, it hasn't been updated in several months, but that seems like a shaky foundation to build on top of.

From: https://github.com/toml-lang/toml

"Latest tagged version: v0.2.0.

Be warned, this spec is still changing a lot. Until it's marked as 1.0, you should assume that it is unstable and act accordingly."


While this is true, NOTHING in Rust-land is stable yet, so it's not as big a deal.


That's not really carte blanche to build everything on unstable technology. The point is that things are supposed to be converging on stabilization, and a lot of what's unstable now is under the direct control of Rust. What TOML does or doesn't do is now a matter that needs to be worked out with Tom. It's not confidence-inspiring in the least.

If the plan is to jettison TOML, then it's simply an odd choice for a first cut. And from a pure perception standpoint, it seemingly reaffirms concerns some have had about the bundler team building cargo (right or wrong).


The plan isn't to jettison TOML, but even though it's 'unstable,' it hasn't changed in a very long time. And, given, uh, Tom, I doubt it will very much. He has better things to do these days.

I can see an argument for INI files, but TOML is basically INI with some extensions. And YAML (which has no Rust parser, and nobody who wants to write one) and JSON (which does have one in the standard library) are very poor configuration formats.

Which one would you have preferred? It seems like a reasonable choice to me.

> And from a purely perception manner, seemingly reaffirms concerns some have had about the bundler team building cargo (right or wrong).

You could just spell out the personnel issues rather than being all FUDdy about it. (I'm friends with Yehuda, but also was room-mates with the current maintainer of Bundler (until very recently). Bundler isn't perfect, but lots of that had to do with people assuming a pre-1.0 project is stable, and upstream bullshit with RubyGems.)


That's a fairly weak argument. If it's not going to change, then maybe some outreach should be done to get him to tag it 1.0. Otherwise you're opening yourself up to the exact same issue you outline for Bundler: people assuming a pre-1.0 project is stable. As long as there's the option for someone to say "this is pre-1.0 so I can change it whenever," it's going to cause concern, because most of us have been burned by that several times over. A de facto 1.0 release vs. an actual 1.0 release.

As for the FUDdiness, there was another extensive comment thread here on HN when Cargo was announced. I didn't feel it necessary to re-enumerate all those concerns. And to the best of my recollection, the problems listed were all technical in nature, not personal. For my part, my list was:

https://news.ycombinator.com/item?id=7421629

But, I'm hardly alone in being concerned. This is something that will drastically influence the ecosystem. So, it should be met with some level of scrutiny. If it comes out passing muster, great. But that's not going to happen by being glib about things.


> If it's not going to change, then maybe some outreach should be done to get him to tag it 1.0.

Absolutely. Let me make an issue about that: https://github.com/rust-lang/cargo/issues/46 If he doesn't respond, I will email him.

> As for the FUDdiness...

Thanks! Since the points are enumerated, I can refute them:

> * You can't override a dependency:

You can, in modern Bundlers. But the transitive dependency issue is inherent to Ruby, not Bundler. Cargo will work like npm in this regard, not like bundler.

> Dependencies can disappear on you:

The central repository isn't built yet, but this is very valid. The reason that `gem yank` is still available to everyone is that you'd be surprised how often people push proprietary stuff up to RubyGems. It was removed for a while and caused a significant burden on the RubyGems team.

Regardless, yes, this is a serious consideration.

> It has weird rules:

Totally fair. Let's learn from that and make Cargo not have weird rules. :)

> * It promotes bad practices:

Yup, that's a Rails thing. Rust should be way better, as it doesn't have this kind of issue.

> * It's slow:

This actually has just as much to do with RubyGems as it does Bundler. See all of the presentations Andre has been doing recently, and the bundler-api project.

> * It was designed for MRI:

Quite fair. We only have one Rust implementation so far, but `cargo exec` won't be necessary.

------------------------

> So, it should be met with some level of scrutiny.

Absolutely! I don't mean to say "just take this without complaints." But without specific, actionable complaints, it won't get fixed. Please make as many specific, actionable complaints as possible, especially now, pre 1.0.


I think we wound up on the same page, even if via a circuitous route, so I don't want to continue beating up on bundler. But I'll take one more parting shot and mention that a lot of its execution overhead is due to thor as well (at least when I've profiled it). Using binstubs rather than "bundle exec" can be a good deal faster.

I feel compelled to mention it because this seems to be a clear case where an aesthetic DSL was chosen over performance, and it can't really be fixed without a backwards-incompatible change. Rust and Ruby are two different beasts, and I get that. I just hope that performance is a core design consideration for Cargo (I have no idea if it is, or if it's just a nice-to-have).

Airing dirty laundry is hardly ever pleasant. I'm certainly not immune to my own set of WTFs. I legitimately just want to make sure Cargo comes out awesome. And I much appreciate Mozilla's commitment to having a standard dependency resolution tool when Rust ships.


Sounds good. I'll make one tiny parting shot to your parting shot, and we'll be done. :)

> Using binstubs rather than "bundle exec" can be a good deal faster.

Absolutely, which is why we switched to them with Rails. It is a tough problem, though, given Ruby's constraints. In my testing, it's the startup time that's the issue, not Thor, because `bundle exec` has to restart the interpreter multiple times and binstubs don't. Anyway.

> I just hope that performance is a core design consideration for Cargo

Rust people already feel the pain of very long rustc compiles, so while I'm not sure that it's an overriding concern, given the Rust world's concern with performance in general, I expect it to be way better. Ruby has always kinda thrown performance to the wolves.

> I legitimately just want to make sure Cargo comes out awesome.

We all do. And I'll admit to being a bit sensitive to 'lol bundler,' which I feel is often said without fully understanding the problem space, which is admittedly large. Not that you are doing that, but I have seen similar comments elsewhere. Once you explain the details, it's pretty clear why Bundler does what it does.

Anyway, yes: let's make Cargo 1.0 and Rust 1.0 super awesome! I'm really excited that we're taking this step forward. It's a huge day for Rust.


I would like to take the opportunity to say that your fast, no-bullshit answers (and actions) are appreciated (in general the Rust community seems to foster this sort of positive attitude, carry on).


Thanks. I have a bit of an... obsessive streak when it comes to these kinds of threads. It works out, though it also means I didn't get a whole lot of other stuff done yet today...

I hope we can keep the Rust community mega positive and no-bullshit. Please let me know if I'm ever not being so.


> What TOML does or doesn't do is now a matter that needs to be worked out with Tom.

For what it's worth, I've just been brought on as a maintainer for TOML proper. Tom is still the guy though. Putting aside my involvement with Rust (<3), I also maintain the TOML library for Go, and I want to see an expedient path to 1.0 with minimal breakage. (There are lots of folks in the Go community already using TOML for things, including Mozilla's Heka project and CoreOS's `etcd` project.)


I saw that, based upon Steve's linked issue elsewhere in this thread. I think that's great and would go a long way in addressing some concerns here. Moving it to an organization is reassuring as well. Thanks for the update!


The read-manifest command seems to output a JSON version of your Cargo.toml, if you have tools you'd like to use that require it:

    cargo read-manifest --manifest-path .


I think that's a good point. Why use .toml when we have better alternatives already?

(I admit I have a bias against toml, but still...)


Which ones would that be?


YAML is quite mature, clean, and a superset of JSON, so people could use JSON if they wanted; it's also used for the config files of pub, Dart's package manager. TOML doesn't seem mature enough.


TOML is extremely simple. YAML on the other hand is incredibly hard to implement. JSON has no comments and is not really that nice for humans.


How about s-expressions :) ?



That would be my choice, but TOML is good enough for me :)



TOML is unstable and has no real spec, while YAML has a well-defined spec and is widely used?


The YAML spec is also super complicated, and writing a parser for it is non-trivial (for comparison, YAML has ~150 EBNF rules while XML with DTD has about ~80)! I'm not comparing validation rules, but they are probably about the same.

This complexity means there is already a parser for TOML, and not one for YAML. That's IMO the main reason they went with TOML.


There are actually already MULTIPLE parsers for TOML in Rust, and Cargo switched between the two yesterday.

As you say, this is a testament to TOML's simplicity.

EDIT: Furthermore, TOML is going to have a 1.0 soon: https://github.com/rust-lang/cargo/issues/46



