My Python Development Environment, 2018 Edition (jacobian.org)
530 points by craigkerstiens on Feb 22, 2018 | 219 comments



I'm a big fan of using Docker because for real world web app development, your app is often more than just getting Python and a virtualenv set up.

Earlier this week I wrote about the pains of setting up a Python development experience without Docker, and then compared it to Docker as well.

If anyone is curious, that's located at https://nickjanetakis.com/blog/setting-up-a-python-developme....

By the way, I would say Docker is anything but slow. I get near instant development feedback on my Flask applications, even when running things through Docker for Windows / WSL.

These are pretty big Flask apps too, which have thousands of lines of code, dozens of packages, tons of assets and require running Celery, Postgres, Redis, etc..


My problem with using Docker (only) is that it doesn't translate well to editors. Like, using jedi-vim[1] with a virtualenv constructed by a Docker container doesn't work at all. Unless I actually run vim itself inside said container.

So unless your dependencies build on macOS (like in my case), everything goes out the window.

[1] https://github.com/davidhalter/jedi-vim


We've got 30+ backends in Python, all wrapped in Docker containers. The majority of the team was pure vim before I joined, and they're slowly converting to PyCharm after seeing how nicely you can set up a remote interpreter against a Docker container. And it has vim bindings, so you don't have to re-learn hotkeys.

I've also been following this VS Code issue on adding remote Docker support for Python: https://github.com/Microsoft/vscode-python/issues/79#issueco...

However, if you're a hardcore vim guy then I doubt these IDEs are gonna satiate your current flow.


I remember, on my laptop with PyCharm and Docker (and the virtual machine Docker lived in), the RAM usage was just excessive. I am by no means a minimalist, but data-science stuff was barely possible even with 16 GB of RAM.

Also, I strongly dislike PyCharm. I am a vim guy at heart, but I am generally not against IDEs. VS Code is okayish. For C++ development, I really loved Visual Studio. But PyCharm just feels wrong: bloated, slow and baroque.


What I do is create the virtualenv locally that docker would also create, and point my editor to the local virtualenv. It gives me all of the intellisense locally. I still need to debug within the container/shell for the moment but I'm hoping there's a pathway coming for vscode that pycharm already has.
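
Roughly, assuming the image installs from the same requirements.txt (paths here are illustrative):

    $ python3 -m venv .venv                      # local venv mirroring what the image builds
    $ .venv/bin/pip install -r requirements.txt  # same requirements the Dockerfile installs
    # then point the editor's interpreter at .venv/bin/python for completion/jump-to-definition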


Funny you mentioned that issue. I'm eagerly awaiting that feature too.

I commented in the issue a few weeks ago: https://github.com/Microsoft/vscode-python/issues/79#issueco....

Once that's implemented, oh man, development nirvana.


yup. happy pycharm and docker user here. even have it working with debugging and breakpoints.


Ever heard of bind mounting? My current Docker image for a Flask app doesn't even contain my code.
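
The gist, in case it helps (image name and paths are illustrative):

    # mount the current source tree into the container and run the app from it
    $ docker run --rm -it -v "$(pwd)":/app -w /app -p 5000:5000 python:3.6 python app.py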


Are you mounting your virtualenv into the container somehow? How does your editor see the code/docs for your pip dependencies?


I have not tried it but I imagine this would work fine.

    Project
        Project/venv
        Project/src
And then have the whole project as a volume. Then your editor can see the same files as you have in the container.


Emacs has pretty nice integrations with inferior docker processes. I'm getting them via spacemacs, so I'm not sure what specific package provides it, but I assume it's elpy.


What kinds of integrations? (Like: what operations, etc.)

I'm using elpy here with a native install and can use e.g. TRAMP for remote editing/Python sessions in most cases (even across machines), but determining Docker mount points so I can edit inside the container, not so much...

I suppose one can simply map a local code directory into a runtime environment, but this also makes e.g. interacting with an in-container interpreter a bit bothersome (not so bad actually, but you have to set a separate interpreter path to something like 'docker exec -it ctid python').

Would like to track this down, I suppose.


C-c C-p will start an inferior python shell in the container, for instance.


Lots of people mentioning you can maybe debug a remote process, but none suggesting what to do about completions.

Maybe you can use bind mounts, but I'm not sure how well that works in practice, or with e.g. a package with a native component.


See my comment a few above. Set up a virtualenv locally that your dev env points to for completions, but actually run the app within the container (with its own virtualenv).

Docker is great for dependencies like databases and queues. I find it totally unnecessary for developing python.


>These are pretty big Flask apps too, which have thousands of lines of code, dozens of packages, tons of assets and require running Celery, Postgres, Redis, etc..

I was trying to create a new project last month, putting all of these together. One thing I got stuck on was combining:

- non-single-thread-Flask

- Redis pub-sub, subscribe to listen to a channel

- and websocket (such as socket.io) upon receiving redis sub message, pushing the message to client side.

I found out that this was a non-trivial thing to do, as Redis subscribe is a blocking function, and I was not able to push a websocket message from the Redis subscribe callback. I wonder how they did that, if at all?


In the past, when I needed websockets in a Flask app I used Faye (backed by Redis) for the websocket back-end.

It was really easy to set up with Docker as it just becomes another container in your Compose file.

Nowadays I would just use Pusher.


Fanout/Pushpin works here, as does having an aiohttp-wrapped WSGI Flask app.
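
Another option, closer to the socket.io setup you describe, is Flask-SocketIO with the blocking subscribe loop pushed into a background task (it can also use Redis directly as its message queue). A rough sketch, assuming flask-socketio and redis-py are installed and a channel named 'updates':

    from flask import Flask
    from flask_socketio import SocketIO
    import redis

    app = Flask(__name__)
    socketio = SocketIO(app)
    r = redis.StrictRedis()

    def redis_listener():
        # The blocking subscribe loop lives in its own background task,
        # so it never blocks the web workers.
        pubsub = r.pubsub()
        pubsub.subscribe('updates')
        for message in pubsub.listen():
            if message['type'] == 'message':
                # broadcast to all connected websocket clients
                socketio.emit('update', message['data'].decode())

    if __name__ == '__main__':
        socketio.start_background_task(redis_listener)
        socketio.run(app)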


Since I switched to NixOS, my Python development environment couldn't be more satisfying.

I use a default.nix file and a requirements.txt file and then with a single command I'm into a shell and virtual environment with all dependencies and packages installed, that I can easily transfer between machines.
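
A default.nix along these lines (creating and populating a virtualenv from a shellHook) is roughly what gives you that single-command workflow; treat it as a sketch rather than my exact file:

    with import <nixpkgs> {};

    stdenv.mkDerivation {
      name = "py-dev-env";
      buildInputs = [ python3 python3Packages.virtualenv ];
      shellHook = ''
        test -d .venv || virtualenv .venv
        source .venv/bin/activate
        pip install -r requirements.txt
      '';
    }

Running `nix-shell` in the project directory then drops you straight into it.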

That is unless I want to use PyQt5.


Nix is one of the most amazing things created lately. IMHO, it doesn't get the attention it deserves as it provides great solutions to really tough problems and it's ready for prime time.

A purely functional package manager, distro, devops. And pretty soon, home directory management. Maintaining servers or doing aggressive changes becomes very easy. There's even a Darwin (macOS) implementation now, so you can manage most of your Mac functionally (with heavy usage of defaults under the hood).

It's still lacking a bit on usability, as some things are not as intuitive as in a simple imperative distribution such as Arch or Alpine. Especially if you need to run some prepackaged software that assumes FHS and binds dynamically to some pre-existing dependencies.

But to me, right now if you are a moderately advanced user I don't see a point in running distros that are stuck in the middle (imperative, complex, lots of defaults). It's either one extreme (simple and imperative, e.g. Arch) or the other (functional, NixOS).


Excuse me if this question is too basic but how does a purely functional package manager work with a side-effectful package installation?

In Python (or basically any other language package manager) you can run arbitrary scripts (post-install etc) and so this doesn't lend itself nicely to a reproducible, functional approach.


A quick and overly simplistic explanation is that all inputs (source code, package dependencies or post-install scripts) used to build a package are employed to compute a unique hash.

Then the result of building a particular package is installed in /nix/store/hash-packagename. And this package links to other packages in the Nix store using precise hashes; there is no linking against whatever happens to be on the system. So the result is referentially transparent. A particular hash is guaranteed to correspond to the same package version, built in the same way and linked to the same dependencies. Furthermore, installing new package versions or modified versions of a package won't overwrite old ones, as the hashes are different.

The same concept applies to a whole system setup, which is identified by a hash computed using all options that configure the system plus all packages available in your environment.


I'm a big fan, apart from how they version. Depending on when you install, the version will point at a different commit hash. In our case, all of our Haskell builds failed on all but one machine (a day apart in resolutions). How NixOS handles environments meant our large package builds would kill the entire OS, or the underlying lib linking just failed - never figured out why, but it wasn't worth investing more time.


This is one of the usability problems they have. There are some simple solutions to this, but it should be more straightforward.


Pinning is the answer to these usability problems.


Would you mind doing a writeup? More and more people have mentioned their success in using Nix, but from what I've seen, there hasn't been a huge amount of accompanying docs.


My understanding is that you should use shell.nix for your dev environment, and default.nix if you want to actually build the final "nix-package". I have a gist that talks about this along with Python and other language environments: https://gist.github.com/CMCDragonkai/dcc1b538352624ea690d695... It's not finished; there are some things that still need work. Also, package/input isolation is not the only isolation you sometimes need (for example, network isolation, fs isolation, etc.). When you do need that, Docker provides all of it, but it's not compositional. There's a discussion in the Nix issues about this (potentially working towards a new nix-shell development environment that can compose different kinds of isolation): https://github.com/NixOS/nix/issues/903#issuecomment-3647447...


Where can I read about this? I'd like a solution that replaces Docker and globally installed stuff for me (I need to manage Python, .NET Core, iOS/Android and several RDBMSs) on macOS/Linux. Is NixOS the way?


I've been following NixOS for a while. I think there is a 2.0 version being worked on; I am waiting for it to be released, then I'll dive in deeper.

On the surface, from what I've seen, it does all the things right and it is kind of what I expected Docker would be before I used it.


Do you use NixOS in production?


I'm an educator, so my answer would have to be no, unless we're talking about producing educational material.

People do though.


What is the issue with PyQt5?


PyQt5 is actually packaged, and I was just being lazy in my comment.

It's actually pyqtchart and qscintilla that are the problems. They're not packaged for nix and won't install in a virtualenv with pip. Something to do with hardcoded paths for dependencies I think.

I'm going to have a go at writing Nix files that build them from source, with the help of a NixOS expert, so we'll see how that goes.


In the nixpkgs repository, sometimes you see certain packages requiring either patches to their source code or sed changes to config files because their build system was hardcoding paths that they shouldn't.


I want a GUI for polyglot local development for cloud-native apps with a variety of backing services that's as braindead and easy as *AMP apps were in the bad old days when you'd be slinging PHP, (plaintext) FTPing up to a shared host, working with guys for whom "server stuff" was pulling teeth and only having one version of Apache, PHP and MySQL to worry about (old terrible ones).

This seems within reach finally, given where we are getting with containers and orchestration. A bloated Electron app plus kube in a VM, maybe? Even that much RAM is still cheaper than my time.

Dropdowns for picking out what runtime, what language, what database, what cache, etc. picking where my code lives, then it boots everything up, solves a local hosts entry + ssl cert and keeps my code synced.

I should be able to huck my macbook in a wood chipper to protect my private keys from terrorists, unbox a new one, install my Jetbrains Toolbox, install this thing, and be back up and running fixing responsive text wrapping "bugs" on my marketing landing pages in minutes.

I should be able to hand my git repo to a potato whose wish to become human was granted by a fairy yesterday owing to their exemplary potato-like behavior and expect they can get the thing booted up and start blowing up my test suite and arguing with me about indentation in minutes.

I should be able to receive news of a cool new framework in a cool new language with a cool new runtime and a cool new database written by angels, etched into crystal tablets discovered in the martian polar ice accompanied with proofs that they are both feature complete and error-free then get going on using these gifts bestowed unto mankind in a brand new repo to write a microservice for filling people's inboxes with unsolicited promotions for male enhancement supplements in minutes.


I’m using Anaconda because it was recommended in a step by step tutorial for playing with deep learning.

What would be involved in removing it from my system and moving instead to this set of tools?

Not necessarily looking for a step-by-step answer, just for general suggestions.

My guess is: find out which python the deep learning tools are using, remove Anaconda, and reinstall the python version needed, using the tools from this post. I’ll need to read up on the tools too. Any pitfalls with this approach?


Anaconda does most of the stuff mentioned, and also makes it much easier to install packages based on C/C++ libraries (which most deep learning things are). So you're better off staying with Anaconda. It's widely used in commercial data science projects, so the idea that no one "takes it seriously", as someone else suggests, is a bit silly. I assume they're thinking about a different context than data science projects.

That said anaconda does have a whole variety of extremely annoying quirks, like packages not being backwards compatible with old versions of conda, or conda going crazy and reinstalling itself, or the way the conda-forge repo has far more packages than the official conda repo. It's very far from perfect. But for data science I think it's basically the standard package manager in Python land.
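
For reference, the day-to-day conda workflow for that kind of project is roughly this (environment and package names are just examples):

    $ conda create -n deeplearn python=3.6 numpy pandas
    $ source activate deeplearn            # `conda activate deeplearn` on newer conda
    $ conda install -c conda-forge opencv  # fall back to conda-forge for extra packages
    $ pip install some-pure-python-pkg     # pip still works inside the environment
    $ conda env export > environment.yml   # snapshot the environment for reproducibility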


> ... and also makes it much easier to install packages based on C/C++ libraries

I hear this often, though I cannot remember ever running into a pip package where this was an issue. Out of curiosity, could someone point me to a pip package and its conda equivalent where this is the case?


Install scipy on Windows and include the Intel MKL linear algebra libraries.


The numpy+mkl wheel is available on Gohlke's site.


Try pip installing scipy or Numpy and you'll see the value of conda.


As if that still was a problem, now that we have wheels, and especially manylinux wheels.



Valid points. Still, most of them are not issues in my daily work.

And by the way Peter, thank you for the amazing work you do.


Although I mostly use the system's numpy package (usually shipped with the distribution), I just tried installing it via pip and had no problem (obviously just one data point).


Just ran `pipenv install scipy numpy`. Installs like any other package.



I really like conda. It is easy to have multiple environments, and it minimizes the footprint by symbolically linking in what is needed.


Rasterio, Fiona and anything else GDAL-based always cause me headaches.


Anaconda was the competition pip needed to become good. When Anaconda was introduced, I (a pip person) was impressed. However, at work we had pip workflows that were working okayish, so I never made the switch.

Today, pip et al. have improved so dramatically that I hardly see a reason to use Anaconda. I am not using deep learning stuff, so I cannot comment on that, but for most scientific Python stuff (scikit-learn, pandas, numpy, etc.) pip and pre-built wheels work very well.

On my new job, I was given a Windows laptop that I now happen to use for my development work (because I am too stupid/lazy to properly configure and maintain a separate virtual Linux machine). I started with Anaconda, assuming it would be less trouble, but quickly ran into problems. pip worked like a charm [].

[] Ironically pip is installed via the anaconda base installation if I am not mistaken :)


Nothing ironic.

Anaconda integrates pip, it doesn't compete with it.

I find that Anaconda is the best among the virtual Python envs I've tried; it does as good a job or better of separating and tracking installations as any, AND falls back to pip (with complete integration and tracking) when a package is not in the conda repos.

pip inside conda works better than pip outside in my opinion. I don't understand the general sentiment towards (ana)conda.


I could provide a Vagrant-based repo that lets you spawn an Ubuntu VM for dev in minutes. I'll try to get it on GitHub.


> like packages not being backwards compatible with old versions of conda

When new capabilities are added to conda, like the ability to support noarch python packages, you're going to need to be working with an up-to-date version of conda to use those packages. Just `conda update conda`.

> conda going crazy and reinstalling itself

Conda will pretty aggressively auto-update itself. Going crazy and reinstalling is a new one though. File an issue with details at https://github.com/conda/conda

> the way the conda-forge repo has far more packages than the official conda repo

Conda-forge is community driven and more "upstream" than the Anaconda, Inc.-provided 'defaults' repositories. Think of conda-forge like Fedora, and 'defaults' like CentOS/RHEL.


I think you could probably use both, side-by-side (but not at the same time/for the same project).

Anaconda might be nicer if the packages you need are C-based and would need compiling on your platform. Depends on your use-case. For experimentation/playing around, stick with what works for you. But if a project required me to install Anaconda, I wouldn’t take it seriously. And pipenv is pretty easy to use, too. So if you plan on distributing it or open-sourcing it, understanding how most other people manage dependencies outside of conda is going to be useful.


Why not seriously?

Many data science environments are pretty much based on Anaconda installations.

(See e.g. these Docker images: https://github.com/jupyter/docker-stacks/tree/master/scipy-n...)

I mean, all packages I know DON'T require Anaconda. But if you need the whole environment, sometimes Anaconda is the easiest tool to install all dependencies.


> Anaconda is the easiest tool to install all dependencies.

For hobbyist stuff that’s fine, and I applaud lowering the entry barrier. My issue would be if I needed Anaconda to deploy the project/dependencies into “prod” in some professional capacity, instead of standard Python build tools. Having said that, I’m not terribly familiar with Anaconda, and it seems to leverage virtualenv under the hood, possibly with pre-compiled packages (like wheels?).


Anaconda does not leverage virtualenv under the hood. Anaconda does not use wheels as part of that. However, you can use pip and virtualenvs within Anaconda, but you'll probably get some inconsistent results.

The reason that Anaconda exists is to make it easier for people to have consistency between their dev environment, the build environment, and the prod environment. If you're seriously going to build all of the scientific Python stack from scratch, correctly and optimally, for your prod environment (and then also do the same for every dev environment you need to support), then you have way too much time and probably haven't actually tried doing it over any real period of time.

The problem isn't that it's impossible. The problem is that there are dozens of different ways to almost succeed, and you don't run into the problems until way too late.


> you can use pip and virtualenvs within Anaconda, but you'll probably get some inconsistent results.

I've never tried virtualenvs inside anaconda (what would the use case be? anaconda already provides a virtual environment)

pip is perfectly integrated within anaconda, in my experience; What inconsistencies are you talking about?


"Hobbyist"? It's widely used in research in data science & machine learning.

If you want a setup which works on various systems and uses Python numeric packages, it is usually the most failsafe way across the various OSes. (Unless you want to put everything in Docker.)

Unless by "professional" you mean "building", well - then you have a point.


Yeah. Unless "Hobbyist" now also means my research university's 104 node, 2548 core HPC cluster...


What do you think you'll get from any of these tools that you do not get from {ana,mini}conda?

I must be missing something, but all these use cases (and more) seem to have been covered by miniconda ages ago.

conda works perfectly well with pip, and creates virtual environments that are at least (in my experience) as good as virtualenv's.


A few years ago Anaconda helped overcome install issues with some packages, but I almost never run into those problems anymore.

One thing to be aware of is that Anaconda modifies various Jupyter configs / installs some of its own kernels. So it can be a hassle to get back to system python + plain jupyter.

https://github.com/jupyter/notebook/issues/1630


Thanks for the replies everybody. One thing that's still confusing to Python tourists (my word for myself since I am usually programming in a different language, but come to Python occasionally to do something) is that everyone talks about pip, when actually it seems pip3 is required to install when using Python 3. Is this no longer the case? Or do people just say "pip" when they mean "pip3"? Or are people actually still living in 2.x land? I'm talking about pip the command line command/binary executable file, not pip the concept / tool name. Similar to the distinction between the capitalized "Python" (name of the language) versus "python" (command entered on the command line/name of the binary on the system).


Any of those could be the case. Guess it's understood that you will put a 2 or 3 on the end of the pip when you want to choose one or the other. And it goes further:

    /usr/local/bin/pip*
    /usr/local/bin/pip2*
    /usr/local/bin/pip2.7*
    /usr/local/bin/pip3*
    /usr/local/bin/pip3.5*
    /usr/local/bin/pip3.6*
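
If you want to stop guessing which one a bare `pip` points at, you can also invoke pip through the interpreter you actually mean:

    $ python3 -m pip install requests    # the pip belonging to whatever `python3` is
    $ python2.7 -m pip install requests  # ditto for a specific 2.x interpreter
    $ pip --version                      # shows which interpreter a bare `pip` is tied to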


Just in case you wish to retain Anaconda, you could simply remove the Anaconda path from your PATH variable in your .bashrc file (assuming you are on Linux). If you need Anaconda again, you could add the path back to the variable, or run `source ~/anaconda3/bin/activate`.


None of these should conflict with Anaconda (or at least Continuum conda, I'm not that familiar with what else is out there) unless you put the conda root bin directory on your path.


>Why? pipenv handles dependency- and virtual-environment-management in a way that’s very intuitive (to me), and fits perfectly with my desired workflow.

Why specifically do you use it instead of virtualenv (+virtualenvwrapper)?


Pipenv combines package management and virtualenv management in one tool. You can create a new project as simply as this:

    $ mkdir myproj
    $ cd myproj
    $ pipenv --python 3.6         # This creates a virtualenv with Python 3.6 for your project.
    $ pipenv install flask        # This installs flask in your virtualenv.
    $ pipenv run flask            # This runs flask in your virtualenv.
    $ pipenv run python           # This runs a REPL with the interpreter of your venv.
    $ pipenv shell                # This opens a shell in your venv.
Pipenv replaces requirements.txt with Pipfile and Pipfile.lock (similar to what you get from npm or yarn in the JS world, and Cargo in Rust).

If you use pipenv for a project, you no longer need to care about pip, requirements.txt or virtualenv.

Pipenv/Pipfile is the new standard recommended by Python.org/PyPA:

https://packaging.python.org/tutorials/managing-dependencies...

https://github.com/pypa/pipenv

https://github.com/pypa/pipfile


Let me add that if you use a virtualenv-based workflow, you will end up with some kind of requirements list (requirements.txt); then you will realize that you also want to version the configuration of that virtualenv with all dependencies resolved (think `pip freeze`). You'll start to separate development dependencies like pytest from production requirements like `six`, and by this time you will have written a few scripts to deal with all this stuff.

This is where pipenv delivers. It is a distillation of best practices for virtualenv configuration.

In your overview, I'd just add an example for a dev installation:

    pipenv install --dev pytest


I was recently explaining this here — you still end up with a virtualenv so it's not a difference in capabilities but rather ease of use:

1. It transparently creates the virtualenv for you

2. The pipfile format handles dependencies and version locking (including hashes of packages), including updates. That means that the versions won't change without your knowledge but upgrading to the latest versions of everything is simply running "pipenv update" to have the virtualenv completely rebuilt (i.e. you'll never forget to add a dependency to a requirements file) and the lock file updated so the next time you push your code the same versions you tested are certain to be used.

3. It'll automatically load the .env file for every command – i.e. your project can have "DJANGO_SETTINGS_MODULE=myproject.site_settings" in that file and you will never need to spend time talking about it in the future.

4. It separates regular and developer dependencies so you don't install as much on servers

5. "pipenv check" will let you know whether any of the versions of any of the packages installed have known security vulnerabilities

6. Pipfile also includes the version of the Python interpreter so e.g. your Python 2 project will seamlessly stay on 2.7 until you upgrade even if your system default python becomes 3.

None of this is something you couldn't do before but it's just easier. Every time a Python point release happens you have to rebuild a virtualenv and now it takes 5 seconds and no thought.


To further elaborate on 2, it solves the problem of maintaining loose version ranges in your requirements.txt file while keeping the versions pinned when you deploy. For example, if you put `foo>=2` in your requirements.txt, this is dangerous without some way of pinning, e.g. `foo==2.18.2`, and running your tests against that before you deploy. But you obviously don't want to manually edit requirements.txt with minor version numbers every time you update. In the past I've maintained a separate file with loose versions and then updated packages with

  pip install -r requirements-to-freeze.txt --upgrade && pip freeze -l -r requirements-to-freeze.txt > requirements.txt
Pipenv makes this much nicer.


Don't forget that usually you'll start to sort your requirements into dev requirements and production requirements which makes these packaging scripts much more complicated.

https://github.com/jazzband/pip-tools would be what I used before pipenv came to be.


Two features I miss from pip-tools:

1. `pip-sync`, An easy way to ensure my local environment actually matches my defined requirements. I guess the pipenv version of this would be `pipenv uninstall --all && pipenv install` which isn't quite as elegant, but perhaps good enough.

2. The ability to create more than two requirement sets. For my projects it's often handy to have three sets of requirements:

• Normal production requirements end users will need to run the app

• CI requirements needed for testing, but not running the app in production (Selenium, Flake8, etc)

• Local debugging tools (ipython, ipdb)

I could include my local debugging tools in the `--dev` requirements, but then I'm unnecessarily making my CI builds slower by adding requirements for packages that should never actually be referenced in committed code. Alternatively, I could leave them out of my dependencies entirely, but then I have to remember to reinstall them every time I sync my local env to ensure it matches the piplock file.


  pip-compile --upgrade
  pip-compile --upgrade-package
are also necessary features to quickly track your dependencies (and transitive deps).

pipenv uses pip-tools, but they haven't exposed these features as far as I can tell.


pipenv is basically a wrapper around virtualenv (similar to virtualenvwrapper), but it also provides other features (like deterministic builds). It has replaced pip, virtualenv and dealing with requirements.txt files in my workflow. More information is in their docs: https://docs.pipenv.org/.


It was easier for me to set up and maintain multiple virtualenvs. It automatically checks for security vulnerabilities. The docs at http://pipenv.readthedocs.io/en/latest/ are worth a read. It might be a bit more opinionated than some folks are used to.


pipenv combines pip and venv. It's not just about activating: if you install, it will create the virtualenv if it's missing.

It also, like pew, opens the virtualenv in a new shell instead of activating it in the current shell. A much saner approach.

The UI is also more user-friendly: one entry point for everything, pretty colors and icons, auto-correction of package names, and so on.

Using Pipfiles instead of requirements.txt is generally a better experience, since a Pipfile contains dev and prod dependencies and allows separate dependency pinning, with file hashes.


Does it install and manage different Python versions? Pyenv does.


If you have pyenv installed, pipenv can use it to install the correct Python version defined in the Pipfile.


pipenv has Pipfile support, which is nice because you don’t need multiple requirements.txt floating around, and pinning is way nicer. Apart from that, it isn’t unlike virtualenvwrapper, but it’s way less hacky. Works on non-bash compatible shells and Windows, it loads .env files automatically, and it’s generally a pleasure to use. Highly recommended.


I just have the latest Python installed and the nice Python support in Visual Studio.

https://www.visualstudio.com/vs/python/


This, me too.

I never understood the need for virtualenv and similar. Do people really encounter trouble with conflicting packages that often? I try to write scripts so they run on different versions of python anyway, unless there is a very specific reason why that is not possible; and even then you can run python versions in parallel on a Debian/Ubuntu box, with different pip installs for each of them.

As for production, I usually ship things in a Docker container anyway, so there is no chance of mismatched libraries.

I guess I just never saw a problem that virtualenv solves.


A rather simple practical example I'm working on right now: moving a Django app from 1.9 to 2.0. There are a few code breaking changes in 2.0. All I need to do is create a new virtualenv with Django 2.0 and I can just switch between both versions simply by changing the environment. I may even need to change some further packages that do not work with 2.0 yet and swap them for something else. I can do that in the new env. Then, when all is working, I export a requirements.txt from my virtual env and can quickly set up the very same virtual env on my production machine.
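
Concretely, that's just a couple of commands (the env name and version pin are illustrative):

    $ virtualenv -p python3.6 ~/.venvs/myapp-django2
    $ source ~/.venvs/myapp-django2/bin/activate
    $ pip install -r requirements.txt
    $ pip install --upgrade "Django>=2.0,<2.1"
    # ...fix the breaking changes, then capture the working set for production:
    $ pip freeze > requirements.txt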


I've found that library dependency issues appear in direct proportion to the number of 3rd-party libraries you use. If you're mostly developing against the standard library with a handful of other stable libraries, it's not an issue. If you're developing with a requirements.txt (or Pipfile) with 10 or more entries, you'll start running into conflicts on a regular basis.

It can also occur as projects age, especially with 3rd party libraries who don't provide API backwards compatibility (which I fully acknowledge is a PITA to develop for, more effort than is frequently justifiable).

Both of these are why I prefer to use the standard library when possible. It's going to remain API stable for a very long time, and the occasional 2-3 lines of boilerplate to do HTTPS requests (and other similar convenience functions) is a cost I'm willing to bear for that stability.


I think pretty much everyone would "prefer to use the standard library where possible". But try building a modern web app with that. No one uses libraries because they want to, they use b/c they need to.


> No one uses libraries because they want to, they use b/c they need to.

I'll have to respectfully disagree with this one. Everyone pulls in requests the moment they have to make any kind of http request for the sole reason that it's more ergonomic, not because it's "needed".

And requests brings in 4 of its own dependencies. Right there you've created a prime chance for everything to go sideways (and I've watched it happen explicitly with the requests library as it bolts on more and more ergonomic features).

For what it's worth, the last web app I built had a DB library from the OS vendor, flask, and gunicorn. All of which, since they were quite stable, never introduced library conflicts.


I experience version conflicts all the time. I work on multiple projects and virtual envs are life savers.


> Do people really encounter trouble with conflicting packages that often?

Yes it's quite common. For example I work on multiple projects with different versions of Django being used in each one.


The linked-to essay describes several reasons why the author uses virtualenv. Which of them don't you understand?

My package supports several optional back-ends, selectable at run-time, and it runs under Python 2.7 and 3.6+. I use tox to manage different virtualenvs for the combination of {backend X but not Y or Z, Python 2.7}, {backend X but not Y or Z, Python 3.6}, etc. for Y, and Z, as well as {backend X and Y and Z} for the two versions. Oh, and I support several releases of each of the X, Y, and Z. That's a lot of virtualenvs.

Usually I only develop on the X+Y+Z version because the full test suite across all the combinations takes about 15 minutes.

I also do coverage testing, and the Python coverage tool isn't hard to use under this setup to combine tox results across multiple virtualenvs.

I don't know if Docker is better. I've been using this setup for some years.
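
For anyone curious, a stripped-down tox.ini for that kind of matrix uses factor-conditional dependencies; something like this (the backend package names are made up):

    [tox]
    envlist = py{27,36}-{backendx,all}

    [testenv]
    deps =
        pytest
        backendx: backend-x
        all: backend-x
        all: backend-y
        all: backend-z
    commands = pytest {posargs}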


Docker containers and virtual envs are in some way both capsules that can serve a similar purpose. virtual envs are in general more lightweight, but limited to the Python world, whereas Docker containers are heavier, but enable you to also include other dependencies in that capsule.

I sometimes feel that Docker containers are a bit misused if they are just used as environment capsules. They provide more isolation (process space, etc.). But of course, that would still be fine.

The software I write mostly does not require databases and other supporting services to run on my laptop. So virtualenvs are all I need (most of the time). There is a docker tax that I don't want to pay, unless I heavily benefit from it.

I guess if I could deploy Docker images, the return on investment would be much better and I might use it more.


Do you just use one environment for every project?

That gets messy. What if you want to set up a new machine? Or a new employee's machine? Etc.

It isn't only about conflicting packages.


Yes - but then again, I don't have that many projects and I tend to use the same libraries across them anyway.

Installing on another machine is just a matter of pip install -r requirements.txt if it's for a dev; otherwise it's just a docker run ....

I do know that most Python devs use virtualenv, I just never understood what all the fuss is about. Of course I don't mind if the others in team use it, I just never saw the appeal so I'm curious what other peoples' reasons are.

Maybe I don't see the need because I came to Python quite late, a little before Docker came around, and we were early adopters for Docker. So we solved testing and production with Docker (and thus never needed to support multiple Python versions in parallel). It's really one of those small mysteries I don't understand. :)


Yeah I understand what you mean. I have around ~12 projects, for different clients, that are just recent enough that I might need to access the environments.

If that was one environment (forget some of them being stuck on 2.x) the number of damn dependencies would be monstrous!

When I need to bring a partner onto the project, I can't give them a requirements file that's 5x what it should be!

> I just never understood what all the fuss is about.

It's to keep environments clean + easy to maintain. Virtual is probably a bad word for it. It isn't like Docker or a VM.

I don't think too many Python projects that are not libraries support many versions; that isn't the point of virtualenv or conda anyway.


Yes, it is very common. I work a lot with ML code released as part of research papers. Everyone uses a different version of TF/PyTorch or whatever framework they are using. I can't imagine working without something like virtualenv (though I use pipenv)


I use virtualenv so I can just easily `pip freeze > requirements.txt` for individual projects.


pipenv is basically the next level of this process: it records every install and locks only the things you installed rather than their dependencies which often change over time.


I'm a bit confused by this statement, do you mean:

I install foo 1.0, which depends on frob >= 1.0 (which happens to be frob 1.1 when I installed it). Foo 1.1 comes out, as does frob 1.2 and 1.3. If I reinstall from the pipfile, do I get foo 1.0 with frob 1.3?

I ask because that sounds like a bug waiting to happen. IMO, frozen requirements should remain frozen.


Here's what happens:

You type `pipenv install foo`. It will create the virtualenv if necessary and adds `foo = "​ * "` to the packages section of the Pipfile. The Pipfile.lock file will add a section for the package you installed _and_ all of its dependencies, including the hashes of the downloaded packages.

That avoids accidental breakage if the package you depend on doesn't set their versions correctly but it also means that your main Pipfile documents the things which you intentionally installed so a year from now you're not wondering why all of your servers have frob 1.2 installed, which only works on Python 2.7, even though nothing you're using now depends on it.

As a concrete example, here's what `pipenv install requests` looks like in a clean project:

Pipfile:

    [[source]]
    
    url = "https://pypi.python.org/simple"
    verify_ssl = true
    name = "pypi"
    
    
    [packages]
    
    requests = " * "
    
    
    [dev-packages]
    
(had I used `--python $(which python2.7)` it'd have recorded that as well)

Pipfile.lock:

    {
        "_meta": {
            "hash": {
                "sha256": "a0e63f8a0d1e3df046dc19b3ffbaaedfa151afc12af5a5b960ae7393952f8679"
            },
            "host-environment-markers": {
                "implementation_name": "cpython",
                "implementation_version": "3.6.4",
                "os_name": "posix",
                "platform_machine": "x86_64",
                "platform_python_implementation": "CPython",
                "platform_release": "17.4.0",
                "platform_system": "Darwin",
                "platform_version": "Darwin Kernel Version 17.4.0: Sun Dec 17 09:19:54 PST 2017; root:xnu-4570.41.2~1/RELEASE_X86_64",
                "python_full_version": "3.6.4",
                "python_version": "3.6",
                "sys_platform": "darwin"
            },
            "pipfile-spec": 6,
            "requires": {},
            "sources": [
                {
                    "name": "pypi",
                    "url": "https://pypi.python.org/simple",
                    "verify_ssl": true
                }
            ]
        },
        "default": {
            "certifi": {
                "hashes": [
                    "sha256:14131608ad2fd56836d33a71ee60fa1c82bc9d2c8d98b7bdbc631fe1b3cd1296",
                    "sha256:edbc3f203427eef571f79a7692bb160a2b0f7ccaa31953e99bd17e307cf63f7d"
                ],
                "version": "==2018.1.18"
            },
            "chardet": {
                "hashes": [
                    "sha256:fc323ffcaeaed0e0a02bf4d117757b98aed530d9ed4531e3e15460124c106691",
                    "sha256:84ab92ed1c4d4f16916e05906b6b75a6c0fb5db821cc65e70cbd64a3e2a5eaae"
                ],
                "version": "==3.0.4"
            },
            "idna": {
                "hashes": [
                    "sha256:8c7309c718f94b3a625cb648ace320157ad16ff131ae0af362c9f21b80ef6ec4",
                    "sha256:2c6a5de3089009e3da7c5dde64a141dbc8551d5b7f6cf4ed7c2568d0cc520a8f"
                ],
                "version": "==2.6"
            },
            "requests": {
                "hashes": [
                    "sha256:6a1b267aa90cac58ac3a765d067950e7dbbf75b1da07e895d1f594193a40a38b",
                    "sha256:9c443e7324ba5b85070c4a818ade28bfabedf16ea10206da1132edaa6dda237e"
                ],
                "version": "==2.18.4"
            },
            "urllib3": {
                "hashes": [
                    "sha256:06330f386d6e4b195fbfc736b297f58c5a892e4440e54d294d7004e3a9bbea1b",
                    "sha256:cc44da8e1145637334317feebd728bd869a35285b93cbb4cca2577da7e62db4f"
                ],
                "version": "==1.22"
            }
        },
        "develop": {}
    }

(EDITED: the HN Markdown parser appears to be a simple regex match and breaks formatting with a * and uses only the ASCII definition of whitespace so I couldn't use a zero-width space. The real output doesn't have spaces around the asterisks).


How does it compare to PyCharm?


It's on par, as long as you don't do too many refactors on nasty codebases. Now, PyCharm chokes on those nasty refactors too, guessing right only about 70% of the time in a way that sometimes feels purely random, but at least its find and refactor preview UIs save the day, sort of...

One thing that annoys me in VS Code is that some operations (jump to definition, etc.) have delays on larger projects, because it does not create an "index db" of your code behind the scenes like PyCharm does, which lets PyCharm show you usages and search for stuff practically instantly.

Overall VSC is great for developing Python on not-huge and well-behaved projects. For degenerated overgrown messes it's not a good solution :)


Just an FYI..

The parent comment was about Visual Studio(https://www.visualstudio.com/vs/python/), and you seem to be commenting about Visual Studio Code(http://code.visualstudio.com), which are 2 totally different products.

Microsoft has terrible naming practices.


Visual Studio Code is not the same as Visual Studio, which is what the op was referring to.


Type hints have made PyCharm an even more amazing IDE. Start using them if you're on a version of Python that supports it! :)
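
If you haven't used them, type hints are just optional annotations (Python 3.5+ syntax) that PyCharm and mypy can use for completion and checking. A trivial example:

    from typing import List, Optional

    def find_user(users: List[str], name: str) -> Optional[str]:
        """Return the matching name, or None if it isn't present."""
        return name if name in users else None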


Just out of curiosity, how do you type the equivalent of a TS union?


Having worked a few years on both, they're both great.


Thanks pjmlp!

Just a note of clarification. There are two "visual studio"s:

https://www.visualstudio.com/vs/python/

This is the classic Visual Studio & runs on Windows only, is a full featured IDE and has goodies like mixed mode Python/C++ debugging.

https://code.visualstudio.com/docs/languages/python

This is VS Code, a cross-platform Editor++ with Python support.

Both are being actively worked on and will see continuous improvements.

[disclaimer: manage the dev team]


I prefer virtualenv + pip.

I primarily develop on Fedora which ships both Python 2.7 and Python 3.6.

pip and venv are built into Python 3, no need to install anything. Additionally, these tools + tox are commonly used for testing in popular Python projects [0]. I have found that colleagues who use other tools struggle with pip and virtualenv, and it puts them at a disadvantage when it comes to working with the larger Python software community, IMO.

[0] https://github.com/pallets/flask/blob/master/.travis.yml


I have added PyInstaller (http://www.pyinstaller.org/) to my toolchain recently for working with Python app distribution. It gets me pretty close to the Golang single-distributable-executable ideal... the main issue is needing to build on each target OS, which kind of sucks, but I can deal with it.


Yes, I am also using PyInstaller for an infrastructure automation CLI project I am doing. Works really well to produce the single executable. Also for most of the automation work, you don't really need Go's speed. Python's convenience makes iterating a lot quicker.


I've used PyInstaller on Wine in order to make Windows executables from Linux. Haven't figured out how to cross-compile for OSX yet, but that at least reduces it from requiring three major OSes to build on to two.



Other than promoting Python and "diversity", what exactly does PyBee/BeeWare do? It sounds like an SDK, but it appears to just point to a bunch of random GitHub projects. It's not very clear to me.


After some digging I've still no idea if it solves my core problem of needing a Mac to build Mac binaries.


We have low interest in Windows support for our software so I haven't even tried yet but WINE is a good idea if it works. For MacOS a TravisCI Mac build should do the trick. It'd be nice if it could all be done in a single run..., but it doesn't seem possible.


> I use multiple OSes: macOS at work, and Linux (well, Linux-ish - actually it’s WSL) at home.

This is pedantry, and the article is otherwise quite informative, but describing WSL as "Linux-ish" is like describing a speedboat as "car-ish" because they both happen to have steering wheels.

WSL is a bloody marvel of engineering, but it is in no way an equivalent of Linux. I'm mentioning this because WSL proponents and detractors tend to miss the fact that understanding those differences is critical to understanding and using--even in trivial ways--WSL itself.


What seems to be missing in the comments here is the `--user` option to pip. It lets you install modules on a per-user basis and doesn't mess with the system Python. All you need to do is add the bin folder this creates to your path.
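
A quick sketch (httpie is just an example package; the bin location below is the Linux default, macOS framework builds use ~/Library/Python/X.Y/bin instead):

    $ python3 -m pip install --user httpie
    $ export PATH="$HOME/.local/bin:$PATH"   # put the per-user bin folder on PATH
    $ http --version                         # the tool is now available without sudo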


pipsi installs into isolated virtualenvs, then symlinks into `~/.local/bin` just like `pip install --user`. Combining pipsi for cross-project tools (tox, twine, nox, etc.) with pipenv for project-specific packages is all you need. You don't need to `pip install --user` some package at that point.
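
Usage is about as simple as it gets; each tool ends up in its own virtualenv (under ~/.local/venvs by default, if I remember correctly):

    $ pipsi install tox       # isolated virtualenv, `tox` symlinked into ~/.local/bin
    $ pipsi list
    $ pipsi upgrade tox
    $ pipsi uninstall tox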


I haven't gotten around to pyenv, pipsi or pipenv yet, but for me, virtualenvwrapper and tmux work great together.

I'm running linux at work and usually keep it alive for days, so tmux is used for session keeping and remote work. I've created a few bash scripts which run some tmux commands for setting up the layout as I want it and also execute "workon" for the specific virtualenv.

For working on a new project, I've another bash script which I run with the repo url and automatically clones it, creates the virtualenv with the same name as the project, installs dependencies and starts a new tmux sessions with two panes in first window and a second window for other stuff.

The great thing about tmux for me is its low memory footprint, so I can have 10-15 sessions running at a time without worrying about the computer slowing down. What takes a bit too long is setting it up again after a restart.
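
In case it helps anyone, those scripts are along these lines (the session name and paths are placeholders):

    #!/usr/bin/env bash
    # Recreate a per-project tmux session: two panes in the first window, plus a spare window.
    SESSION=myproj
    tmux new-session -d -s "$SESSION" -c "$HOME/code/$SESSION"
    tmux send-keys -t "$SESSION" "workon $SESSION" C-m
    tmux split-window -h -t "$SESSION"
    tmux send-keys -t "$SESSION" "workon $SESSION" C-m
    tmux new-window -t "$SESSION" -n scratch
    tmux attach -t "$SESSION"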


>What takes a bit too long is setting it up again upon a restart.

Have you tried this tmux plugin? https://github.com/tmux-plugins/tmux-resurrect I use it all the time at work and at home


Is there a command needed in pipenv like the one needed in virtualenv? E.g.

  > source env/bin/activate
How does one activate one environment over another?

Why is pipsi a separate thing?


Based on install.rst[0]

It seems it keeps a ledger somewhere (haven't dug into it). Then to run commands, instead of using `python main.py`, you now use `pipenv run python main.py` and it automates things. It still depends on Virtualenv.

As an alternative, Pyenv + pyenv-virtualenv work by creating the environments in a separate folder. You can then `cd` into a project root folder and there use `pyenv local x`, and every time you `cd` into the directory or a subdirectory, it looks up the tree until it finds a `.python-version` file. This specifies the environment. It can be a Python version or a virtualenv, and it loads it.

[0] https://github.com/pypa/pipenv/blob/4f2295a1dbf7fe6fa36ef4ec... [1] https://github.com/pypa/pipenv/blob/4f2295a1dbf7fe6fa36ef4ec...


when you want to run a command 'inside' the pipenv of the current directory, do:

    > pipenv run {command}
This mirrors how npm works.

There's also:

    > pipenv shell
to give you a shell in which the environment is setup correctly for you


You just cd into your project's directory and run "pipenv shell" and it activates the virtualenv for you.


pyenv/pipenv are for your development. pipsi is for installing Python tools/applications that you just want to use, like apt/yum.


This would have been useful if he compared them to virtualenv, which as far as I am aware is still the "standard" way of managing python environments.

I tried virtualenvwrapper a while ago, and that was basically just another set of commands to do the same thing as virtualenv. Having already learned the virtualenv commands, that gave me no real advantage. Are any of the tools mentioned here any different?


Does virtualenv handle different python versions? My understanding was that it just handled different sets of 3rd party packages.


Yes, you can make virtualenvs with different Python interpreters.
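
For example, picking the interpreter with `-p` (env names here are arbitrary):

    $ virtualenv -p python2.7 env27
    $ virtualenv -p python3.6 env36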


Every time I use Python I miss NPM and package.json.


Every time I use JavaScript/NPM/WebPack I miss my sanity.


last project i setup parceljs with yarn and is so easyyyy.. :) nvm is my day saver :)


LOL, hits too close to home.


You miss a package manager layered on top of the one already in your OS, one that's seemingly run by people with no clue what they're doing, is rife with security issues and can break your entire system as we saw today?

I have no clue why a sane person would run npm.


And yet millions do?


Millions of people believe in chakras, horoscopes, and homeopathy - argument from popularity is a logical fallacy for good reason. Consensus does not equal quality.


And those same millions have been halted simultaneously in development around the world by the removal of key packages due to namespacing issues. Or, today, we find that running the newest version of npm under sudo can rewrite file system permissions across the entire hierarchy.

What was that you were implying about how the choice of millions is probably pretty good?


Millions of people write their passwords down on little slips of paper next to their monitor.

Are you implying that because something is popular, it's therefore "good"?


pipenv is like npm for Python, and Pipfile works similarly to package.json. There's even a Pipfile.lock.


pipenv is more like yarn (same version locking concept)


If I remember correctly the newest npm does the same though.


Yeah, npm and yarn are pretty close at the moment. Yarn had a big advantage when it was first released, but npm seems to have caught up.


In what way is NPM and package.json different from using pip and requirements.txt?


npm installs to a local node_modules folder by default, whereas you have to set up a virtualenv or something similar to get that from pip.


My understanding is that's what pipenv gives you. You call pipenv install, it creates a venv if necessary and fills it. If you want to run tools within the env, you call pipenv run {command}.


Yes, you're absolutely right, I was comparing npm specifically to pip (without any additional tools like pipenv).


Why? What can NPM achieve that pip cannot? You can export requirements.txt easily. NPM downloads so much seemingly useless stuff and makes the node_modules directory so bloated; pip is like a breeze in comparison.


Hell, prior to pipenv being a thing, Python package management made me miss Maven of all things!


Not having done anything in python in a few years, it is nice to learn that it's catching up. It may be hoped that the npm-bashers will learn to appreciate the new features available for python.


For that flavor, maybe try:

    python3 -m pip install -t .pip ...
    export PYTHONPATH=".pip:$PYTHONPATH"
Use whatever directory name you prefer instead of .pip. Mucking with PYTHONPATH is a bit dirty.


Seems like you could get all this done with conda. Just a thought.


I'm all for trying new things, but yeah I did have a similar thought. Conda is a good general-purpose tool that seems to check all these boxes and I've never had any serious issues with it.


as someone who has to manage projects for multiple clients, I find docker + docker-compose to be the best solution. The overhead of docker is totally worth it because the container separation makes life so much easier.

all I need to work on a client project is basically:

cd /path/to/project

docker-compose build (just once)

docker-compose up


Absolutely. Though I have yet to have the chance to use it at work (large enterprise), it has totally been a game changer for me in terms of creating side projects in dev and seamless deployment. There is definitely some overhead and not everything plays nice all the time, but the thought of going back to running 5+ terminal windows / installing databases globally makes me shudder.


Forget python development, why is it so damn hard to USE python scripts in a version-correct way in the first place? I have python scripts in my system which rely on different python versions. And since I have a default python version, 2.7 (through symlink or env or whatever), all the python 3 scripts fail until I switch the default version manually (due to library dependencies, print format, etc.). Why don't these scripts add "#!/usr/bin/python3" at the top? And why is it so hard to just have 2 versions co-exist? I MUST be doing something wrong here?

EDIT: "python3 <script.py>" doesn't always work because some scripts are written in bash and they call python within the script.


Pyenv looks great! Sad I didn't know about this sooner, though I do have the luxury of using mostly one version of Python and have only confused it for the system Python once or twice.


Pyenv is great. On our Macs we've had zero issues installing older, specific versions of Python, every time. Highly recommended. (Getting it working properly with zsh was a bit frustrating, but that's my own fault.)


Do you know if it's easy to start using pyenv with existing projects, or should I wait until my next de novo project?


It’s pretty easy if you have a set of requirements handy and have `pyenv-virtualenv`:

    $ pyenv virtualenv 3.6 some-name
    $ echo some-name > /path/to/project/.python-version
    $ cd /path/to/project
    $ pip install -r requirements.txt
That’s it.


Curious here... why would you use it over just venv (assuming python3)?


Pyenv is a different thing than venv, it manages versions of Python itself rather than isolating dependencies for projects. You're probably confusing it with pyvenv, which is Python 3.4+'s command-line interface for managing venv-compatible environments.


pyenv is really great. The ability to switch environments merely by cd'ing into the project directory is a life saver.


This is moving a bit tangentially to the topic at hand but I love pyenv (and rbenv) for the same reason.

My question is now that I'm having to do more PHP development again: is there anything like this for PHP? Last time I looked it seems there were a couple attempts (including phpenv[0]) but that they never caught on or were abandoned.

Is there something like this for PHP? If not, any idea why not? Is phpenv so stable that it hasn't needed to be touched in 5 years?

[0] https://github.com/phpenv/phpenv


The first requirement in the article:

> I need to develop against multiple Python versions, including 2.7, various Python 3 versions (3.5 and 3.6, mostly), and PyPy.

The article states that this is an "unusual" setup, but unfortunately it is all too common. The Python 2/3 split has created such a large and sad schism in the development community, has wasted countless developer hours, and has held back the language itself incalculably. A travesty.


Virtualenv + PyCharm, 3 years, happy as ever. For ML/DL stuff, Conda is also highly recommended.


Same here. I'm also quite happy with the level of integration between PyCharm and frontend things (eslint, babel, react...); I never have to switch editors again. I tend to have a bash script for sourcing node versions, setting the correct path for node binaries, and activating the env, plus a makefile for building Docker images. I try to keep all the tooling reproducible in the project source.


Is it considered best practice or advisable to run production deployments in a virtualenv? I have always considered it a tool for managing multiple development efforts on the same machine, not a way to manage production environments.


Many prefer it, though it may be considered redundant in a container.


Would it be easier to use individual Docker containers, each with their own Python environment, and then have your source directory mapped to a directory in the container?


I guess, if you are cool with sharing the images when you need to share the environment. I think I just never liked the "bulk" that comes with Docker, though it has gotten better.

I think Docker is cool in general but for other stuff than this specific use-case.


I use Docker for all the related services: postgres, redis, solr, zeo... but I use virtualenv and pip. I also use setup.py for declaring entrypoints and coordinating all the services (I don't much like the docker-compose approach). I'm also using docker.py and boto3 for the devops part.
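
A sketch of that pattern (the image tags are just what I'd reach for today):

    $ docker run -d --name dev-postgres -p 5432:5432 postgres:10
    $ docker run -d --name dev-redis -p 6379:6379 redis:4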


pipenv makes it pretty trivial to get the same effect but without Docker. It also means that your favorite dev tools have easy access to the code you're working on.


Thank you St. Patrick. I recently tried to install an ancillary python based cli tool and realized my python dev environment on my mac is fucked.


I haven't had to write Python in a while, but are there reasons to use virtualenv etc. instead of a Docker container?


Yes: Docker adds an enormous amount of overhead. Instead of just your Python code, you now have to manage a second operating system to host the containers (including things like not being able to debug or edit directly, having to debug networking, etc.) plus whatever OS + deployment is happening in the actual container.

For deployment, Docker solves a lot of problems, but there's a significant cost for local development.


I actually prefer to use Docker, especially on projects with multiple people. The number of times people shipped code without adding dependencies, or we on-boarded new people who wouldn't read the docs to install system dependencies, was astounding. Now it's as easy as installing Docker and running `docker-compose up`, and since our CI and servers use the same images, it's virtually guaranteed to work. I've also noticed a huge productivity leap from being able to run SQL servers as containers rather than installing them on the system! So much easier to manage!


> install system dependencies

This is the important point of Docker. If your app has dependencies that are outside of the Python ecosystem (especially the pita ones to install) Docker seems like an excellent solution. I have an app that requires Oracle drivers and some other binaries with custom compile options (not available through apt) on Linux and I really wish I had built it in Docker.


Less of a pain in the ass for rapid iteration, zero overhead, fewer dependencies = less complexity.

Virtualenv is basically just a directory with its own python and site-packages, plus an activate script that puts its bin/ first in your $PATH. Pyenv extends the same idea to the Python interpreter itself (via shims) and lets you rapidly switch between versions.
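
A quick way to see what a virtualenv actually is (the env name is arbitrary):

    $ python3 -m venv demo-env
    $ ls demo-env/bin            # its own python, pip, and the activate script
    $ . demo-env/bin/activate
    $ which python               # now resolves to demo-env/bin/python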


I wish you weren't getting downvoted for this. docker-compose is honestly no more complicated of a thing than virtualenv to use.

That said, the main reason to do it is that Python is a language of idioms and this is one. It's easier to take the trail than break your own.


Agreed. I don't see how Docker is slower once the compose environment is set up (especially if you're running Docker on Linux) and proper volume mounts are made for the source files. In fact, I would say developing with Docker will help with deploying to production, because you'll be able to see how the app will work in production.

I can see Docker being slower if virtualized, but on Linux, it’s just a fancier BSD jail, no?


I find virtualenv much easier to manage and use than docker. Unless you're changing dependencies and versions multiple times over a day, it's not a pain.


Docker is SLOW to build the image. Virtualenv is great for development.


But you do not have to build the image while developing (in Python). Just map your project into the container using `-v`.
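
Something along these lines (the image name and command are placeholders):

    $ docker run --rm -it \
        -v "$(pwd)":/app -w /app \
        my-app-image \
        python app.py    # edits on the host show up inside the container immediately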


Why are you building the image so much? Just run a pre-built image and mount your code into it.


Yes, you don't need a whole Linux distro for what's simply a set of libraries and a binary file.

A Docker container is just the lazy solution.


Mine is comparatively simpler: Emacs + Elpy, and I just use pip and pyenv for managing packages. Granted, I don't deploy.


I don't understand this at all. It's 2018. My dev. env. for <insert thing> is some text editor that knows how to "jump to definition" and "find all usages" (this is sometimes referred to as "IDE"), and a bunch of Dockerfiles to build and run the tests.


Why is being 2018 relevant? Use whatever works for you. Your environment sounds hip and I'm sure you'll move onto whatever fashionable container platform and "some text editor" that is available next, but I prefer finely tuned tools that work for me.


So you're using Docker instead of virtual environments? I feel there is much more overhead with Docker, but I could be wrong. Maybe I need to try it out again!


No, it's pretty much the same as you remember it. Using docker instead of just a normal virtualenv is overkill.


Overkill - Yes and No. Yes, because you are spinning up a full VM to run a docker container locally. No, because the container is also the container run in prod, with no opportunity for some other process to come along and hose your otherwise clean install.

The process and FS isolation also make sysadmin-me all tingly inside. That way you can't hose up anybody else's clean install either (even if you're compromised).

As a side note, on mac, the VM that runs docker containers requires less ram and CPU than Hipchat. Go figure.


I'd never deploy more than 1 client to a machine anyway, so isolation in a security sense does not make much sense to me if I'm being honest.

But I understand. If the workflow works for you and/or your team, well, what's the problem? It's working!


And virtualenv can be overkill unless you've got multiple clients or legacy commitments.


Or you don't want to hose your system python, or you don't want to shit all over any other work you may be doing, or you want to ever test on anything other than a single (probably polluted if you're screwing around using your system python for everything) environment, or you're a professional using python, or if you want to be taken at all seriously....the list goes on! And yeah, using your system python means you're either not doing any professional/serious work or you're a hack who needs to stop doing anything professionally/seriously.


Never happened in twenty years.

Manually installed packages are already segregated and can be uninstalled ya know. Might be a problem if you have no admin skills.


Way to miss the point!...all of them! Nobody said anything about manually installed packages being unsegregated or immutable, so let's not put up any more straw men. It's just a stupid way to work.

I think you may be the one without admin skills if you think working on your system Python is anything except reckless. No one cares that you have 20 years of doing it like that; it's a poor argument and shows a really bad attitude.


Your appeal to absolutes reeks of immaturity, lack of experience, and cargo-cult reasoning.

On the contrary, if it rarely to never happens compared to the effort involved, it's not "reckless" at all, but a tradeoff. The site-packages folders are just a tree of files and folders, not hard to reason about. Nothing gets "hosed" without your participation. No need to live in mortal fear. In other words, the cure is as bad as the disease.

The truth is that venvs are a hack with a lot fewer use cases now that user packages at the low-end and containers at the mid/high-end are now ubiquitous.

Honestly newbies would be a lot better off just avoiding venvs entirely. I make an exception for pipenv when appropriate since it hides the complexity as well as possible.
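
For reference, "user packages" here just means pip's per-user install scheme, e.g.:

    $ pip install --user requests   # lands under ~/.local instead of the system site-packages
    $ python -m site --user-site    # prints where user packages go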


yup, same here.


I'm old school, I just install XAMPP and I'm OK.


This seems more like the 2008 Edition. You have to be pretty stubborn to not adopt Docker in 2018.


If you're building a native python application, there's no reason to adopt yet another technology.

A native python app in "Docker/Containers" is going to look like a dockerfile that includes a copy of the app and runs three commands. Either you build that on the server (what's the point) or use a registry (additional complexity for little benefit).
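
Concretely, that looks something like this (the names are made up):

    $ docker build -t myapp .                        # Dockerfile copies the app and installs deps
    $ docker tag myapp registry.example.com/myapp:1.0
    $ docker push registry.example.com/myapp:1.0     # or skip the registry and build on the server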


I've been using a similar setup which I found online [0] when I was looking for a way to have multiple Python versions including working Jupyter notebooks etc.

It's been working great for me. Pyenv and virtualenvwrapper are really good. I'm not sure why this one needs pipsi, though. You can install CLI tools for both Python 2 and 3 using just Pyenv as demonstrated above.

[0] https://medium.com/@henriquebastos/the-definitive-guide-to-s...


there are way too many python dep/env managers/things

pipenv

pyenv

mkvirtualenv

virtualenv

pipsi

venv

pew

conda

virtualenvwrapper

i'm sure i'm forgetting like 5.

this is like https://xkcd.com/927/

for the record i use pyenv and virtualenv (although playing with ML i'm using conda)


As a Python developer, I just stay with pip and virtualenv, but I share the smh/wtf sentiment here. The data science folks enjoy conda but I don't see that being useful to me anywhere else. I feel like every other year someone will invent a new pip env.

Another point of confusion: pyvenv (see the comment section, edited) is now deprecated and venv, shipped as part of the Python 3.6 installation, is recommended instead. Let's not forget the distutils vs. setuptools confusion, much like argparse vs. optparse in the past (which are both horrible). The experience of using pip and PyPI (now Warehouse) is much better than that of Ruby or Node.js, but these "2-in-1" tools are just ridiculously "creative".

As always, pro-tip: consider using the following to ensure the environment is loaded properly when you are deploying to production:

    /full_path/env/bin/python myapp.py --workers=3
over

    source /full_path/env/bin/activate && python myapp.py --workers=3
The latter is fine when you are doing local development in your terminals.

If you go on the #python IRC channel, every year a group of helpers will collectively recommend one of the above and then perhaps a different one the following year, so please do yourself a favor and just stick to pip and virtualenv.


If you want to not infect your current shell with the new environment, you can just run it in a subshell by using parentheses:

    $ ( . /full_path/env/bin/activate && python myapp.py --workers=3 )
This works for anything that you'd like a temporary shell for:

    $ pwd
    /home/me

    $ ( cd /tmp; pwd )
    /tmp

    $ pwd
    /home/me


> pyenv is now deprecated and venv is recommended

Isn't it `pyvenv` that is deprecated? `pyvenv` != `pyenv`


Yes, you are right. See, that's the other (#%#$^#$%#$, excuse my Chinese) confusion. Such confusingly similar names, pyvenv and pyenv; whoever came later really could have picked a better one.

Corrected in my post.


Thanks for saying this! I just spent the last 10 minutes confused by that, trying to figure out what was going on here and what I actually have installed now:

    ⟩ pyenv --version
    zsh: correct 'pyenv' to 'pyvenv' [nyae]?


Conda is especially useful if you want to use scientific packages on Windows.
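
A typical conda workflow, for anyone curious (the package list is just an example):

    $ conda create -n sci python=3.6 numpy scipy pandas
    $ conda activate sci    # `source activate sci` (or plain `activate sci` on Windows) for older conda
    $ python -c "import numpy; print(numpy.__version__)"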


Good point, thank you. I don't do much on Windows anymore. But a quick search around Visual Studio (I know there are some core developers working for MSFT) yields this: https://stackoverflow.com/questions/15185827/can-pip-be-used...

Probably exciting for VS users.


Yeah, I use pip + virtualenv on most machines (and have for years), but conda is the only one that actually seems to "just work" on Windows.


Apologies if I'm missing something obvious (relatively new to Python), but what's the advantage of loading it the 1st way? I'm currently using conda and do something similar to your 2nd example: `source activate conda-env && python myapp.py`


I'd like to hear what people think as well, so everyone please fire away. The 1st option is very explicit and doesn't modify your environment. The problem I have with the latter option is not just that it messes with my environment variables (the bulk of which is setting PATH), but that I don't know what will change in future releases. I don't have time to read the activate script (btw, there is one activate script each for sh/bash/zsh, csh, fish, and bat; if you use none of those, which is likely a very corner case, you are on your own).

Using the 1st method, I am certain the shell isn't being modified and loaded with stuff I don't need, and the command and arguments are explicit (when I look at top, or when I do strace).

Generally, you can continue to do #2; I use it when I am doing development, like having multiple terminals up where I just source into the environment and run pytest instead of typing the full path to pytest. But I highly recommend #1 when you are running your code in production (webapp or not). I am not too familiar with conda, so I will defer that to the experts.
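
One easy way to convince yourself that the first form picks up the right environment (same placeholder path as above):

    $ /full_path/env/bin/python -c "import sys; print(sys.prefix)"
    /full_path/env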


    python3.6 -m venv . && source bin/activate
    pip install -e .
This builds around the idea that every project is itself a Python package (i.e. it has a setup.py).


Well, pyenv manages multiple versions of Python itself, virtualenv creates virtual environments, and venv is virtualenv added to the stdlib (you forgot pip, by the way). virtualenvwrapper and mkvirtualenv are management tools for venvs, and pipenv is also that, but with pip integrated.

There's pyenv, pip and virtualenv/venv (same thing) and then there's a bunch of tools to make them more implicit or ergonomic (if you like the way these tools work more than the base).

So really there's not a multitude of competing standards, there's 2/3 complementary standards and then people building their own tools atop that, not entirely unlike the multitudes of Jabber or Twitter clients we used to have.

Things were worse before virtualenv rose to prominence.


Please realize that the concept of virtual environments in Python predates things like npm, so there are lessons that were only learned later on and that no one knew about at the time. Also realize that the things you list all layer on top of each other, so you're listing lower-level libraries next to higher-level ones.

E.g. pipenv <- pew, pyenv <- virtualenv/venv, so that list could be likened to complaining that Python, C, assembly, and CPU microcode are "too many". Sure, you could argue they are usable independently of each other, but you aren't about to say that there are too many pieces of tech there and we should go back to sending raw electricity to the CPU.

I think that list could legitimately be cut down to pipenv and conda (maybe pipsi, but that's just for installing CLI tools and isn't for development). Everything else is lower-level than what most people will need for app development.


I use Buildout: http://docs.buildout.org/en/latest/

Buildout is much more than a Python package manager. It runs pluggable recipes, where building/installing a Python package is only one of the many available recipes. It was invented at a company where I once worked. It replaced huge piles of complicated Makefiles.

Buildout is obviously less popular than other solutions, but I think it manages complexity quite well and saves me a lot of time compared with other ways I could assemble Python software.


More than 5: I recently listed all programs for managing virtual environments I could find in a blog post [1] because I find it pretty confusing how many there are.

[1]: https://meribold.github.io/virtual-environments-9487


It's also super easy to build Python from source and manage the paths yourself, in case the magic tools fail to deliver the desired Python version.
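
If you go that route, it's the classic configure/make dance; roughly (the version and prefix are just examples):

    $ tar xf Python-3.6.4.tgz && cd Python-3.6.4    # tarball from python.org
    $ ./configure --prefix="$HOME/opt/python-3.6.4"
    $ make -j4 && make install
    $ "$HOME/opt/python-3.6.4/bin/python3" --version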


There is only pip and virtualenv. They go together.

Virtualenv creates an application directory. Pip installs the required packages into it.


> and Linux (well, Linux-ish - actually it’s WSL) at home.

Is that really considered Linux at all? I would not say that Wine is Windows, for example.



