
Pip -t: A simple and transparent alternative to virtualenv in Python - fzumstein
http://blog.zoomeranalytics.com/pip-install-t/
======
Galanwe
I don't really get what this is supposed to solve.

\- `virtualenv` works by creating an overlay of the `site-packages` in
`$VIRTUALENV` and changing the shebang of scripts to use the absolute path of
the Python interpreter in the overlay. That way, not only do you get
dependency isolation, but you can also call your programs directly from
outside the virtualenv and they will work.
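A quick way to see that shebang rewriting in action (a sketch using the
stdlib `venv` module, which behaves the same way here; `demo-env` is an
arbitrary name):

```shell
# Create an environment and inspect an installed script's shebang:
# it points at the environment's own interpreter by absolute path,
# which is why the script works when called from any directory.
python3 -m venv demo-env
head -1 demo-env/bin/pip    # e.g. #!/home/you/demo-env/bin/python3
```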

What you propose is both bad practice (a relative path in `$PYTHONPATH`) and
rather annoying (you need to be in a specific directory to run anything).
Trying to automate that will surely end up in pure hell!

\- As in "you need to `cd` to _this_ directory, then run _that_"

\- Or "WTF it worked on my machine when I was in this directory but now it
doesn't"

~~~
ehremo
Well, as we mentioned, it's supposed to address the situation where you don't
want all the overhead of virtualenv.

The things you care about may be different from what others care about: I'm
guessing you don't care about Windows compatibility, or you wouldn't talk
about shebangs, and others may not find only being able to run scripts from
the project root an issue (I don't, for example).

Why is a relative path in the PYTHONPATH necessarily bad practice? It's not a
rhetorical question, I am genuinely curious.

If you're happy with virtualenv then continue using it - all we have done is
propose an alternative.

~~~
publicfig
I'm curious, what sort of overhead are you seeing with virtualenv that leads
you to use this/any alternative? I've used virtualenv + virtualenvwrapper on
some seriously underpowered and low-on-space machines, and have really not
noticed enough overhead to even make a blip.

~~~
AUmrysh
I'm pretty certain virtualenv just changes some environment variables.

I found a writeup [1] on how to write your own virtualenv. It doesn't look
like there's much overhead there.

I suppose one could justify not using virtualenv on a system with limited
resources or network access. I've known people who spent hours compiling
Python 2.7 from source on embedded systems because they rolled their own
distro, and in that situation it makes sense not to want to mess with
virtualenv when you've just spent hours getting pip working.

1: [https://www.recurse.com/blog/14-there-is-no-magic-virtualenv-edition](https://www.recurse.com/blog/14-there-is-no-magic-virtualenv-edition)
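For the curious, "entering" an environment really is mostly environment
variables; a sketch using the stdlib `venv` module (which works the same way
as virtualenv here):

```shell
# The activate script mostly prepends the env's bin/ to PATH and exports
# VIRTUAL_ENV (plus some prompt cosmetics) -- grep it yourself:
python3 -m venv env
grep -E 'VIRTUAL_ENV=|PATH=' env/bin/activate
```

The import isolation itself comes from the interpreter detecting its own
prefix, not from the variables, which is why calling `env/bin/python`
directly also works.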

~~~
mikemikemike
My impression is that the benefit of this approach is that it fits the mental
model of NPM and Bower. Being able to manage packages across your stack using
the same kinds of steps feels... organized. The relative path issue, however,
needs some more consideration.

------
majke
I'll use the opportunity to moan about the state of packaging in the Python
world. Why, after so many years, is there still no way for me to ship software
written in Python in deb format?

I end up doing some real crazy stuff:

* build in docker

* using virtualenv /usr/local/mypackage

* then manually copy over the init scripts "cp /usr/local/mypackage/bin/* /usr/local/bin"

* and debianize "/usr/local/mypackage/" and "/usr/local/bin".

Except it doesn't work:

* I need root to create the venv in /usr/local.

* This technique copies garbage like the "activate" script to bin.

* virtualenv ships its own python binary. If the python version on the host differs, it won't work.

* The workaround is to rewrite the bin scripts.

Total mess. All I need is a deb with some bin scripts, and all dependencies
bundled. Is it _that_ hard...

~~~
mtrn
I tried a few things myself, like fpm[1], rpmvenv[2] and lately pex[3] - but
so far I haven't found anything that I would call solid _and_ simple. We have
constrained deployment environments that do not allow most of the tools that
would ease the process on a development machine. Basically, I need a deb or
rpm.

I have some hope of wrapping a pex into a deb/rpm, but I would not call this
approach simple.

That's unfortunate, since Python is a wonderful language for many
data-sciency tasks - it makes possible and pleasant things that would be a
pain in other languages.

* [1] [https://github.com/jordansissel/fpm](https://github.com/jordansissel/fpm)

* [2] [https://github.com/kevinconway/rpmvenv](https://github.com/kevinconway/rpmvenv)

* [3] [https://pex.readthedocs.org/en/latest/](https://pex.readthedocs.org/en/latest/)

* Another tool: [https://github.com/spotify/dh-virtualenv](https://github.com/spotify/dh-virtualenv)

~~~
jlees
We use dh-virtualenv and it's working out pretty well for us so far.

------
killerbat00
> Looking at Javascript, tools like npm and Bower provide the easy, reliable
> and powerful package management capabilities which it feels like Python is
> missing. The key to their success? Both tools download a copy of the right
> versions of the right libraries, by default placing them in a special folder
> directly within your project's directory.

Isn't this exactly what virtualenv does anyway? Plus, with virtualenv you
don't have to muck around in your $PYTHONPATH and you can still choose which
Python version is installed for each virtualenv.

This article doesn't seem to mention any explicitly, but what are the
compelling reasons to use pip -t over pyenv/virtualenv for project isolation?
I guess using one tool both to install dependencies and to isolate your
working environment is handy. I'm unsure the overhead of virtualenv is so
high that it offers a compelling reason to switch (at least for me).
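For reference, the approach the article proposes looks roughly like this
(an offline sketch: `mymod` is a stand-in for a package you would normally
fetch with `pip install -t .pip <package>`):

```shell
mkdir -p .pip
printf 'X = 42\n' > .pip/mymod.py    # stand-in for an installed package

# The relative path is the contentious part: this only works when run
# from the project root.
PYTHONPATH=.pip python3 -c 'import mymod; print(mymod.X)'    # prints 42
```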

------
publicfig
I'd hardly consider virtualenv with virtualenvwrapper "overkill". It's a
pretty elegant and incredibly easy solution to conflicting dependencies that
can be used in almost any environment.

~~~
fzumstein
Elegance is always a relative term, but one could argue that having 0
dependencies is more elegant than having 2 dependencies.

~~~
spelunker
Well 1 dependency, since this solution technically requires pip ;-)

------
gdamjan1
Oh, it's 2015 and people still don't know about PEP 370, PYTHONUSERBASE and
pip install --user :(

[https://www.python.org/dev/peps/pep-0370/](https://www.python.org/dev/peps/pep-0370/)

~~~
l00mer
That's for per-user isolation. This is about per-project isolation. A single
user can have many projects with different dependencies.

~~~
gdamjan1
that's what PYTHONUSERBASE (the environment variable) is there for!
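A minimal sketch of that per-project use (the directory name `./userbase` is
arbitrary; the exact site-packages layout varies by platform):

```shell
# Point the per-user installation scheme at a per-project directory:
export PYTHONUSERBASE=./userbase
python3 -c 'import site; print(site.USER_SITE)'
# -> something like ./userbase/lib/python3.X/site-packages
# pip install --user <package> would now install there (network permitting).
```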

~~~
l00mer
Ah, I hadn't thought of using it that way. Seems like a working solution, even
though it's not the problem that the PEP is meant to solve.

~~~
ehremo
But you may want to keep 3 installation levels: global, per-user and per-
project. If you use the second to achieve the third, then you no longer have
the second.

------
deathanatos
> forget about source env/bin/activate and deactivate!

If activate/deactivate bothers you, you can also just call the interpreter
directly: env/bin/python. You can call any pip-installed binaries similarly,
for example env/bin/ipython if you have that installed, or env/bin/pip.

I have a small utility[1] to magic those into "vpython", "vipython" and "venv
pip", respectively. (The last one is the general "execute this binary in the
virtualenv" form. The utility makes some assumptions about how you name your
envs, but it also searches up the tree, so I can cd into a subdirectory and
don't need to type ../../env/bin/python… or source the env.) I believe
there's another implementation of this idea out there, but I can't find it at
the moment.

(My vim is/was a bit python-heavy, and sourcing envs utterly breaks it: if I
start vim with an env sourced, it gets the right vimrc but the wrong pip
installation.)
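Concretely (a sketch; `env` is whatever your venv directory is called):

```shell
python3 -m venv env
# No activate needed -- the interpreter knows it lives in a venv:
./env/bin/python -c 'import sys; print(sys.prefix != sys.base_prefix)'  # True
# The same works for any installed entry point, e.g. ./env/bin/pip
```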

[1]:
[https://github.com/thanatos/vpython](https://github.com/thanatos/vpython)

------
pbhowmic
I started using pip -t when I started coding for App Engine. I prefer it to a
virtualenv because it works out a lot neater with the dev server. App
Engine's earlier approach was to use either virtualenv or to pull all the
dependencies' code down into the project directory, but lately the App Engine
team has started recommending pip -t instead.

------
stefanha
Be careful with relative paths in PYTHONPATH.

If you share a file system with untrusted users (like /tmp or an NFS mount) or
extract an untrusted archive (zip, tar, rar) then you might inadvertently
execute python code that another user placed there!

It's the same reason why you shouldn't use PATH=. in your shell.

This poor man's virtualenv doesn't seem like a good idea...

------
ptx
This is a very bad idea, for the same reason that relative directories are not
on the regular PATH on Unix: On a multi-user system, or even a single-user
system that sometimes has untrusted files on it, you can't safely use your
tools if they can be overridden by whatever someone happens to have placed in
the current directory.

For example, if you're looking through various home directories for some file,
you might be doing something like:

    $ cd /home/foo
    $ ls
    $ cd /home/bar
    $ ls

If your PATH contained ".useful_stuff/tools" and the user bar happened to have
a file called "/home/bar/.useful_stuff/tools/ls", that file has now been
executed instead of /bin/ls, with your privileges. Your account has now been
compromised.

Try this:

    [ptx@hn downloads]$ python3 -c 'print("hello")'
    hello
    [ptx@hn downloads]$ export PYTHONPATH="./.pip"
    [ptx@hn downloads]$ python3 -c 'print("hello")'
    You have been owned! Have a nice day.
    hello
    [ptx@hn downloads]$ cat .pip/site.py
    print("You have been owned! Have a nice day.")

For this same reason, if you use Mercurial on a repository owned by someone
else, you get something like:

    [root@host foo]# hg stat
    not trusting file /home/ptx/foo/.hg/hgrc from untrusted user ptx, group users

Microsoft had a related security problem a few years back where it would load
DLL files from the current directory:
[https://isc.sans.edu/diary/DLL+hijacking+vulnerabilities/9445](https://isc.sans.edu/diary/DLL+hijacking+vulnerabilities/9445)

------
AUmrysh
This is a nice, simple solution to segregating python package environments. I
personally prefer virtualenv, as I can navigate around the filesystem and open
ipython from anywhere within the venv. I guess I don't see the usefulness of
it so much because I have no problem using virtualenv.

The relative path is an interesting trick, I'll have to keep that in mind in
the future.

------
erikb
I followed packaging and requirements tracking relatively closely from the
packaging wars of late 2012 to early 2013, and let me say this: we don't need
another solution. We need the solution we have to work better. Pip +
virtualenv is enough to learn and handle, and nowadays it works well for most
use cases. It's maybe not perfect, but it gets the job done. Problems I still
see: most of all, integration with the system's package manager; a pip3
install -e package will be overwritten by the corresponding PyPI package if
you bump the version, even though the -e installed version is already the
required one; and sudo pip3 freeze doesn't work, at least on current Ubuntu.

There are so many little details that need to be fleshed out in an
infrastructure tool like this; we can't solve all these boring problems
multiple times.
------
thefreeman
There is probably an obvious answer to this, but a cursory google search
turned up no answers.

The idea of having multiple versions of dependencies on a single system is
not exactly a new problem, and it is solved by _many_ other package managers.
Is there some inherent weakness in Python modules that forces them to be
managed this way? Do the maintainers of pip insist on this for other reasons?

It seems like one of the bigger complaints I hear about Python, so I feel
like it must be harder than it seems.

~~~
Galanwe
But this is exactly what virtualenv does...

1\. You create a virtualenv

2\. You enter the virtualenv

3\. You install dependencies in it

Redo these steps for every isolated set of dependencies you want.

The solution proposed here does not solve anything. It only provides a
half-working equivalent of virtualenv that uses a hidden directory to run
your scripts...
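In shell terms, those steps are (a minimal sketch using the stdlib `venv`
module; the last line assumes network access and a requirements file):

```shell
python3 -m venv env                  # 1. create
. env/bin/activate                   # 2. enter
pip install -r requirements.txt     # 3. install into ./env
```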

~~~
pyre
The differences:

1\. Your 'node_modules' is automatically created by npm. No need for a
separate tool.

2\. You don't have to 'enter' an npm environment other than changing to the
directory where it exists.

These really aren't huge issues, but I can see it being an annoyance for some,
especially people coming from Node.js to Python.

~~~
reverius42
There is a more fundamental difference, which is what happens in a single
project with a diamond-shaped dependency graph with version mismatches, like
this:

    A -> B, C
    B -> D == 0.1
    C -> D == 0.2

This dependency graph is handled easily by npm and other tools (at the
expense of disk space, memory usage, etc., and potentially some small
incompatibilities/gotchas), but it is impossible to satisfy with pip.

------
goblin89
IMHO the complexity created by the limited isolation facilities of virtualenv
and the like isn't worth the resources you save by not just having a VM for
each app you work on.

The complexity arises from differing system-wide dependencies that are out of
scope for the various package managers and virtual environments, and from the
development environment (assuming we're addressing the need to have multiple
apps coexist on one machine for development) not matching production.

------
fizixer
Not sure why it tries to replace virtualenv, which is already obsolete in
favor of the built-in pyvenv.

~~~
falcolas
Not everyone is off Python 2.x yet, for a wide variety of reasons.

------
wkornewald
Also have a look at vex:
[https://github.com/sashahart/vex](https://github.com/sashahart/vex)

Unlike this pip solution, vex extends your PATH to include installed scripts.
It also works well on Windows and is really easy to use.

------
personjerry
Isn't it rather dangerous to have a "transparent" version of virtualenv?

------
w_t_payne
This is somewhat similar to what I do ...

~~~
fzumstein
Interesting! What do you do differently?

------
fit2rule
Packaging: if you have to do it, you should be building a bootable,
distributable OS instead.

No matter how many times others learn this lesson, I personally see it
repeated over and over.

