I don't think that cargo's decision to always consult the .toml was actually motivated by security. Instead I think cargo's authors just wanted to spare developers the hassle of constantly invoking commands to update the .lock file (is this what "convention over configuration" means?). So I think this is an instance where a choice made for better ergonomics ended up improving security as well.
Note though that even with current cargo, any PR that claims to do a simple cargo update could very well not be such a PR. It could e.g. include an "update" to a yanked and possibly vulnerable/backdoored version. One malicious upload, even if yanked, could be milked for a long time. See repi's comment in this thread for a solution: https://news.ycombinator.com/item?id=21887445
We run this in GitHub Actions for all of our Rust repos, primarily to block banned or duplicate dependencies and to enforce approved licenses. But it also works for verifying the sources of crates, which prevents this specific issue.
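For reference, this is the kind of policy such a tool can enforce. A sketch of a `deny.toml` for cargo-deny (the tool from the linked comment); exact keys can vary between versions, so treat this as illustrative:

```toml
# Fail the check for any license not on this list.
[licenses]
allow = ["MIT", "Apache-2.0"]

# Fail when two versions of the same crate end up in the dependency tree.
[bans]
multiple-versions = "deny"

# Only allow crates fetched from crates.io; unknown registries
# or git sources fail the check.
[sources]
unknown-registry = "deny"
unknown-git = "deny"
allow-registry = ["https://github.com/rust-lang/crates.io-index"]
```

Run with `cargo deny check` in CI, and a PR that sneaks in a dependency from an unexpected source fails before review.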
I always set up an ignore rule for them and use package.json's dependency list instead. My projects are all on the small side though; is the idea to commit the .lock file and avoid breakage through things like minor version updates of your first level dependencies? Wouldn't that carry a bigger security risk for large numbers of packages that then aren't receiving patch updates as readily?
The lock file records the hash of each package and uses it to verify package integrity on future installs. If any of the hashes don't match up, the `npm clean-install` command will throw an error.
The NPM ecosystem commonly uses these types of specifiers. Even if your own dependencies are all specified in “package.json” using exact versions, those packages almost certainly declare their own dependencies (and so on, transitively) using semver range specifiers. That means whenever the author of one of those packages publishes a new version, your resolved dependencies may change.
The package-lock.json is simply a record of the exact dependency resolution graph based on the registry state that existed at the moment you generated it, so that you can reliably reproduce that graph a month or a year later. Otherwise it’s very common for NPM-based projects to have `npm install` fail later on (or succeed, but create a bug or unit test failure) because some new version was added to the registry and created a compatibility issue or bug.
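As an illustration (package name and versions are made up), a package.json asks for a compatible range:

```json
{
  "dependencies": {
    "left-pad": "^1.3.0"
  }
}
```

while the package-lock.json records the one exact resolution that satisfied it at generation time, including where it came from and its digest (digest elided here):

```json
{
  "node_modules/left-pad": {
    "version": "1.3.0",
    "resolved": "https://registry.npmjs.org/left-pad/-/left-pad-1.3.0.tgz",
    "integrity": "sha512-..."
  }
}
```

The range stays flexible for future updates; the lock pins what actually ships today.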
But package-lock is intended to be used at the project level. i.e. lock all package versions in the current project or app.
That isn't "useless in most cases", it solves specifically the case it is meant to solve.
> It’s even more useless if you think that production apps are built very often as tagged docker images = are reproducible by design.
But you're stuffed if you need to rebuild the docker image later, or for example, go back to an old commit/release, create a 'support' branch to retrofit a change and build a new image. You want repeatability all the way down.
No, a much more useful use case is avoiding breakage from minor version updates in your 15th-level dependencies. You might be cognizant of your semver best practices, but the team who publishes the string utility library included by the query builder included by your ORM might not be so careful.
I think you mean "Yes, but also ..."
They didn't say "only", they said "things like".
1. Ultimately dependencies end up in your production environment, so folks might wanna review the diff of dependency updates.
2. No need to run yarn install. So if your build server has only access to your git repo, no need to get packages from a remote environment.
3. No need for yarn or npm on non-Frontend dev machines. It will just work with Node installed.
4. If npm or yarn ever cease to exist (or the registries), any commit should still work as intended.
So if that part of the infrastructure is down, you cannot work. A repo might be more error resilient, because if you check it out, you check it out.
Edit: a minor version update should not be backwards incompatible and break things. If it does, the package is poorly maintained imo.
You can pick your poison.
Each one of those exact versioned libraries you referenced will itself likely have semantic version dependencies which may update themselves. Unless every single dependency you depend on also follows the policy of pinning to exact versions (and their dependencies do the same, and so on), you are still vulnerable (See left-pad incident, which was a sub-dependency of babel).
The only way to avoid this is the old shrinkwrap or the new package-lock.json feature.
The author is making a PR that updates the package.json, which also means the lockfile is updated.
If you don't accept any PR that touches the package.json file, then that means you're not accepting a PR that introduces any new dependency, tries to update a dependency for security reasons, or uses a feature available in a newer version of the dependency.
All of those reasons seem like valid reasons to accept a lockfile change -- that the author of the PR changed the package.json file for any of those reasons.
As for updating dependencies for security/features, we leave that to Dependabot.
Sure, if a package absolutely needs exact dependencies for its entire tree, it can check in the lock file, but I've not found this necessary in practice provided I use dependencies I trust that follow semantic versioning.
Not sure you're talking about the same thing as everyone else.
There's a big difference if you're maintaining a package vs maintaining an app.
If you're a "package" maintainer, you don't want to pin dependencies. Because the package consumer (i.e. people building apps using your package) should not have their exact versions dictated to them.
If you're an "app" maintainer, you absolutely need to check in your lockfile, because you should care about repeatable builds.
NPM's package-lock.json https://github.com/search?q=filename%3Apackage-lock.json&typ...
Yarn's yarn.lock https://github.com/search?q=filename%3Ayarn.lock&type=Cod...
Bundler's Gemfile.lock https://github.com/search?q=filename%3AGemfile.lock&type=Cod...
This is not true for a lockfile, where the whole point is to capture the specific versions at the point in time that the generation is done.
The benefit of the contributor committing the lockfile is that it encodes the exact combo of dependencies that worked at the time the related code changes were made (in the same PR).
This means other project maintainers aren't left scratching their heads trying to figure out why a PR worked on your machine but fails in CI.
We expect lockfiles to change when we update or add dependencies. If a pull request cannot build with a lockfile generated when it is merged, then that change should not be merged.
This isn't true at all. You can't predict when a new, potentially breaking, version of a dependency might get published. It could happen a second after you generate your lockfile, or create your PR.
At time of PR creation we could have versions 1, 2, 3 and 4 of our transitive dependencies.
At time of merge we could have versions 1.1, 2.2, 3.3 and 4.x available.
Dependencies are outside of your control, so always "smell", that's why it's crazy to do anything other than pin the heck out of them.
For diff viewers that are highlighting entire lines for changed content, the difference in the package name wouldn't be spotted in reviews when also changing the version or some other part of the URL.
Or am I missing something?
NPM does not care whether the package name in the tarball matches the name in the lockfile.
So you can just point the tarball path at another package with a similar name, and nobody will ever notice.
And a domain whitelist won't work in this situation either.
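To make the attack concrete, here is a hypothetical lockfile entry (names and digest are made up). The key claims one package, but `resolved` points at a different package's tarball on the very same registry domain, so a domain whitelist is satisfied and the integrity hash, which the attacker computes for their own tarball, still matches:

```json
{
  "node_modules/lodash": {
    "version": "4.17.21",
    "resolved": "https://registry.npmjs.org/lodash-utils/-/lodash-utils-4.17.21.tgz",
    "integrity": "sha512-..."
  }
}
```

A reviewer skimming a large lockfile diff sees a familiar name and a plausible URL, and the substitution slides through.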
On the other hand, looks like a big missed opportunity from snyk.io! Feels like they could have taken the extra step and time to come up with a counter to this in their product, and use that post as both a way to showcase their product and the discovery.
Maven pins down the version for you if you follow the "best practice", or you can override sub-dependencies' versions if you so choose (via dependencyManagement), and it also lets you control the actual repo to download from. And as a bonus, you get to share the binary across different projects if they are on the same machine.
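A sketch of the two pom.xml mechanisms mentioned; these are standard POM elements, but the coordinates, version, and repo URL below are made up for illustration:

```xml
<!-- Force one version of a transitive dependency everywhere in the tree -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
      <version>2.15.2</version>
    </dependency>
  </dependencies>
</dependencyManagement>

<!-- Control which repository artifacts are downloaded from -->
<repositories>
  <repository>
    <id>internal</id>
    <url>https://repo.example.com/maven2</url>
  </repository>
</repositories>
```

The shared local cache the parent mentions is just `~/.m2/repository`, which all projects on the machine resolve against before hitting the network.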
Something npm doesn’t do, so not a great example. Simple apps having dependencies with lots of files is also a package ecosystem thing, not a package manager thing.
Doesn't do anymore. It definitely used to, I'd estimate around 2014-2015.
I blame the tiny JS stdlib and the resulting culture for this, rather than the package manager.
It's true that the package manager enabled that culture to flourish, but a package manager would have come about either way. And if npm had somehow disallowed massive dependency chains and 14 line packages, a competitor would have filled the gap and been adopted as a result.
While the node stdlib isn't all-bolts-included, I wouldn't exactly call it tiny either. What are you missing? I kind of blame the community for moments of stupidity like that leftpad thing just to pick one, if somebody had added some great extensions to the string class before that happened it would have been some other tiny issue.
Look at C. Libc is tiny and relatively useless compared to most languages. Things are rarely ever added relative to decades ago. But there are many high quality self contained projects with very thin dependencies. Example: sqlite depends on libc and a few syscalls. It's an extremely solid piece of work despite those constraints. What's your excuse?