I prefer pipenv to Conda, and I don't like having Jupyter(Lab) installed in each venv separately, so instead I only add `ipykernel` to each venv and then use my system-level JupyterLab to access per-project kernels; it seems like that wouldn't work here?
I'm not sure I understand your kernel question, but VS Code's Python extension has everything built in. As soon as you add a `# %%` comment, it treats what follows as a notebook cell and automatically gives you a "Run Cell" button that uses your chosen Python interpreter.
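For instance, saving something like the following as a plain `.py` file is enough for the extension to offer "Run Cell" above each marker (the file contents here are just an illustrative sketch, not from the thread):

```python
# %% [markdown]
# A cell marker turns the lines below it into a runnable notebook cell.

# %%
import math

radius = 2.0
area = math.pi * radius ** 2
print(f"area = {area:.4f}")

# %%
# Each subsequent marker starts a new cell, run independently or in sequence.
circumference = 2 * math.pi * radius
print(f"circumference = {circumference:.4f}")
```

The nice part is that the file stays ordinary Python, so it diffs cleanly and runs fine outside the editor too.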
I tell pipenv to store venv stuff in the project folder (`export PIPENV_VENV_IN_PROJECT=1`), and then do the following to start a new project:
pipenv --python 3.7
pipenv install ipykernel
pipenv run python -m ipykernel install --user --name="$(basename "$PWD")"
Then, in the list of kernels, in addition to the usual suspects I'll have one named for the folder I ran the above in. If I started a notebook before all that, I'll have to change its kernel. Running more `pipenv install` commands at the prompt makes new packages immediately available in the running notebook.
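Since the kernel ends up named after the project folder, here's a quick sanity check of what that name will be (using a hypothetical throwaway directory):

```shell
# make a throwaway project directory (hypothetical path) and check
# what the registered kernel would be called
mkdir -p /tmp/demo-project
cd /tmp/demo-project
kernel_name="$(basename "$PWD")"
echo "$kernel_name"   # prints "demo-project"
```

Afterwards, `jupyter kernelspec list` shows the registered kernels, and `jupyter kernelspec remove <name>` cleans one up when the project goes away.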
Cannot wait to give it a try.
I submitted this a week ago, but sadly nobody took notice. https://news.ycombinator.com/item?id=19794865
It never caught on outside of the R community, but the format itself is language-agnostic.
R Markdown and its derivatives are such great tools. I'd also suggest xaringan if you want to present your work on a big screen.