pip freeze -r requirements-to-freeze.txt > requirements.txt
pip freeze > requirements.txt
And sure, beware of git URLs being replaced by egg names in the process.
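The full cycle from the post, presumably, is to install (and upgrade) from the hand-maintained file and then snapshot the result; note that `pip freeze -r` preserves the ordering and comments of the given requirements file:

pip install --upgrade -r requirements-to-freeze.txt
pip freeze -r requirements-to-freeze.txt > requirements.txt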
pipreqs --savepath gen.requirements.txt /
Disclaimer: I contribute to it.
I have a branch that can handle Django, but I kept it private because it's not really general-purpose to introspect Django's settings.py and then read data out of it.
The way I've seen this work successfully in practice across many projects (a rough sketch follows the list):
setup.py: specify top-level (i.e. used directly by the application) dependencies. No pinned deps as a general practice, but it's fine to put a hard min/max version on them if it's for-sure known.
requirements.txt: Pin all deps + sub-deps. This is your exactly known valid application state. As mentioned, a hasty deploy to production is not when you want to learn a dependency upgrade has broken your app.
requirements-dev.txt: dev dependencies + include requirements.txt
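A minimal sketch of that layout (the project name and version pins here are placeholders, not from this comment):

# setup.py -- abstract, top-level deps only
from setuptools import setup, find_packages

setup(
    name='myapp',               # hypothetical project name
    version='1.0',
    packages=find_packages(),
    install_requires=[
        'requests',             # unpinned by default
        'Django>=1.8,<1.9',     # hard min/max only where for-sure known
    ],
)

requirements.txt is then just `pip freeze` output from a known-good environment, and requirements-dev.txt starts with `-r requirements.txt` followed by dev-only packages.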
The one thing I wish was part of this is that there was a record of conflicts and purportedly successful combinations. That is, you distribute a package that says "I work with foo>=1.0" but you don't (and can't) know if the package works with foo 1.1 or 2.0. Semantic versioning feels like a fantasy that you could know, but you just can't. Understanding how versions work together is a discovery process, but we don't have a shared record of that discovery, and it doesn't belong to any one package version.
This sense that package releases and known good combinations are separate things developed at separate paces is also part of the motivation for requirements.txt, and maybe why people often moved away from setup.py.
For an example of what bootstrapping a full Plone 4 site via buildout entails, have a look at the (defunct) good-py project:
As long as one manages to keep each project small, and in a virtualenv of its own, managing "known good sets" (in buildout, or for pip) shouldn't really be too hard. But as projects grow, a real system for managing versions will be needed. As far as I know there are no good systems for this... yet. Ideally you'd want a list that people could update as they run into problems, so that if projectA runs fine with foo=1.0 and bar=1.1, maybe projectB discovers a bug in foo<=1.0.4rc5 and can update the requirement.
It's not a trivial thing (see also: All the package managers, apt, yum, etc).
I'm supporting both by using some silly logic to pull in requirements.txt and supply that to setup().
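That kind of logic tends to be a few lines in setup.py; a sketch (not the commenter's actual code):

from setuptools import setup

# read pinned deps out of requirements.txt and hand them to setuptools;
# naive: would need extra handling for -e / -r / VCS lines, which is
# part of why it's "silly"
with open('requirements.txt') as f:
    install_requires = [
        line.strip() for line in f
        if line.strip() and not line.startswith('#')
    ]

setup(
    name='myapp',     # hypothetical package name
    version='1.0',
    install_requires=install_requires,
)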
However, manually maintaining a requirements file in addition to setup.py is quite tedious in the long run. It is much better to freeze requirements during the build process and use the generated requirements file for deployments. But, decent test coverage is key here.
I like the concrete-vs-abstract, library-vs-application ideas there. Most opinions on this overlook the distinction.
Not to say the PHP strategy is necessarily ideal (it feels restrictive to me); I just mean the comparison between the two is instructive. That's the sort of ecosystem that would make the abstract part matter more.
I much prefer fpm for creating Linux distro packages from a Python distribution, since it can create debs.
I don't trust random scripts to generate packages for me; writing an rpmspec or a debian control file is hardly a challenge, and I'd encourage more people to take the 5 minutes to do so rather than relying on tools like FPM.
Also, I assume you're talking about the bdist_rpm target that comes from distutils.
The author's proposed method is basically the same as how php's composer does it, with its composer.json and composer.lock. Specify your application requirements by hand in composer.json, run composer install, and composer.lock is generated. Check both in so you can have consistent deploys. When you want to upgrade to the latest versions within constraints set by hand in composer.json, run composer update to pull latest versions, updating composer.lock. Run tests, and commit the new composer.lock if you are satisfied.
Composer merely cloned Ruby's Bundler and its Gemfile/Gemfile.lock in that regard. Which is a good thing. It's beyond puzzling that Python has spawned multiple dependency managers, none of which have replicated the same golden path.
However, there are a few projects that try to solve this, but the fact that the Python community has not settled on one cripples any initiative to fix it.
The above is possible for Python as well; I sketched out an implementation which patches __import__ to handle dependency resolution by version, but... I'm afraid it's a bit unpythonic.
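Presumably something along these lines (a toy sketch of the idea, not the actual branch):

# Python 3 spelling; on Python 2 the module is __builtin__ instead
import builtins

_real_import = builtins.__import__

def versioned_import(name, globals=None, locals=None, fromlist=(), level=0):
    # a real implementation would look the name up in a version map and
    # adjust sys.path so it resolves to the pinned release before delegating
    return _real_import(name, globals, locals, fromlist, level)

builtins.__import__ = versioned_import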
foo ~= 1.8.2
bar ~= 2.4.1
I am not sure what ~= means in requirements.txt, but I'm gonna guess it means something like ~> or ^. With a system like that, if everyone follows semver correctly, we are fairly okay. The problem is that not everyone does, and you have no guarantee that deploying the same code at two points in time t1 and t2 will produce the same application, since one of the dependencies might have released new code.
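For the record, PEP 440 defines `~=` as a "compatible release" clause, so that guess is right:

foo ~= 1.8.2   # equivalent to: foo >= 1.8.2, == 1.8.*
bar ~= 2.4.1   # equivalent to: bar >= 2.4.1, == 2.4.*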
You can still pip-install things within a conda environment, and conda can manage more dependencies than just Python dependencies (a common use case is managing R dependencies for a Python statistical workflow).
You can do
conda list -e > requirements.txt
conda create -n newenv --file requirements.txt
I believe that conda makes it easier to selectively update, but even if you don't enjoy those features of conda, the same two-file trick as in this post will work for conda as well, since you can use `conda update --file ...`. Conda's "dry-run" features are more useful than pip's as well.
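Assuming `conda update --file` accepts the same hand-maintained file, the analogous cycle would be something like:

conda update --dry-run --file requirements-to-freeze.txt   # preview the upgrades
conda update --file requirements-to-freeze.txt             # apply them
conda list -e > requirements.txt                           # re-pin the result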
The perfect feature for conda to add is the ability to specify alternative Python distributions, either by a path to the executable, or by allowing alternative Python distributions to be hosted on Binstar.
I can understand why Continuum wants conda to heavily influence everyone to use only Anaconda, but I think the goodwill of making conda work for any Python distribution would bring more to them than keeping it focused solely on Anaconda. (For example, I know some production environments that still use Python 2.6 and are prevented from updating to 2.7 -- and even if they did update, they'd need to keep around some managed environments for 2.6 for testing and legacy verification work).
However, this workflow has a little drawback. If you have a dependency not from PyPI, e.g. `pip install git+ssh://github.com/kennethreitz/requests.git@master`, it won't work.
Pinto has two primary goals. First, Pinto seeks to address the problem of instability in the CPAN mirrors. Distribution archives are constantly added and removed from the CPAN, so if you use it to build a system or application, you may not get the same result twice. Second, Pinto seeks to encourage developers to use the CPAN toolchain for building, testing, and dependency management of their own local software, even if they never plan to release it to the CPAN.
Pinto accomplishes these goals by providing tools for creating and managing your own custom repositories of distribution archives. These repositories can contain any distribution archives you like, and can be used with the standard CPAN toolchain. The tools also support various operations that enable you to deal with common problems that arise during the development process.
This is how open source works. How do you think Node.js and Ruby got the capability? Do you imagine they sprang fully formed from hyperbole like "a dozen others"?
Only thing I can think of is you'd track top-level packages in requirements-to-freeze.txt during development, while your deploy would use requirements.txt to get a deterministic environment.
In the case of requests[security], it installs some extra packages that allow for more secure SSL. http://stackoverflow.com/questions/31811949/pip-install-requ...
In this particular case, this installs 'pyOpenSSL>=0.13', 'ndg-httpsclient', 'pyasn1'. See: https://github.com/kennethreitz/requests/blob/46184236dc177f...
$ pip freeze --local | grep -v '^\-e' | cut -d = -f 1 | xargs ./venv/bin/pip install -U
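Piece by piece:

# pip freeze --local : in a virtualenv, skip globally-installed packages
# grep -v '^\-e'     : drop editable (-e) installs, whose lines aren't name==version
# cut -d = -f 1      : keep only the package name before the '=='
# xargs ... -U       : upgrade each remaining package to its latest version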
Ought to be a first class command, though.
After running that the first thing I do is run the tests. Then I freeze, commit and push.
This is usually a good thing to do at the beginning of a new sprint so that more subtle bugs caused by upgrading packages can be teased out before releasing a new build.
For my projects that I release on PyPI I don't want to use pinned dependencies, but for them I run periodic tests that download all dependencies and run the test suite, so that I'm (almost) instantaneously notified when a package I depend upon causes a bug in my code (e.g. by changing an API).
With RubyGems/Bundler I love the ability to point to github repos, lock versions (or allow minor/patch versions to update), have groups, etc.
requirements.txt and pip just feel awkward and weird, especially when combined with virtualenv. In comparison to Ruby this is just stiff and strange.
I've had nothing but problems with more complex packages like opencv and opencl as well.
I could say the same things about rubygems. With coding a lot of it is what you are familiar with. To me python and pip is clean and simple, ruby and rubygems is overly complex. But that's because I'm familiar with python, so yeah.
You can do this with pip.
The Gemfile contains the top-level dependencies that the app needs; the Gemfile.lock ensures that everybody developing or deploying is using the same gem versions for top-level and resolved transitive dependencies. Periodically one can `bundle update` to upgrade gem versions: http://bundler.io/man/bundle-update.1.html
It does continue to surprise me that distinctions like these are not handled by Python tooling, which has this absolutely sordid history around packaging, which continues apace in the wheel vs. egg wars... http://lucumr.pocoo.org/2014/1/27/python-on-wheels/
1. Point to github repos.
`pip install git+ssh://git@github.com/echweb/echweb-utils.git`
2. Lock minor versions
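For (2), in requirements-file terms (the package name is just an example):

requests>=2.4,<2.5   # allow new patch releases, but nothing from 2.5
requests~=2.4.0      # same effect via PEP 440's compatible-release operator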
What is a group? My google search did not turn up anything relevant.
FYI groups allow you to group various dependencies. E.g. you have one group of dependencies for development, another for testing, and a minimal one for production.
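The closest pip analogue is probably separate requirements files that include a base file (the file names are just convention):

$ cat requirements-dev.txt
# everything production needs, plus dev-only tools:
-r requirements.txt
pytest
ipython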
--process-dependency-links will permit you to pull in VCS repos as dependencies locally, but it does you no good when distributing packages for third-party consumption
Or maybe use pip-tools' convention: requirements.in for source and requirements.txt for compiled output, at least.
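With pip-tools that convention maps onto two commands:

pip-compile                # reads requirements.in, writes pinned requirements.txt
pip-sync requirements.txt  # makes the environment match those pins exactly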
What's more, pundle does not use virtualenv; it installs all packages to the user directory ~/.Pundledir and imports frozen versions on demand.
It has all the nice commands like install, upgrade, info, etc.
Check it out: https://github.com/Deepwalker/pundler
Some posts on theory and usage: http://nvie.com/posts/pip-tools-10-released/
It makes managing Python packages and even Python versions quite easy.