JupyterLab is ready for users (jupyter.org)
958 points by thenipper 5 months ago | 248 comments

I'll be teaching a group of nine- and ten-year-olds to code. I'm planning on using Jupyter.

What I'd really like to do is make a multiplayer naval game, with each player controlling their ship from their own notebook. Players would start out by running commands like fire(range=400, bearing=120) right from a cell, but would later be able to automate their ship - for example, pick the nearest enemy, get its range, and plug that into firing at it automatically.

My server would be projecting a big map of the world up on the wall.

However, to do this nicely, I need the ability to make a cell (or a function defined in a cell) run every X milliseconds. I know I can do this for one cell, with a loop and sleep function, but I'd really rather have multiple cells/functions "running", so we can break the code into smaller chunks, and to let them build their own ship UIs.

Any advice on how to do this in Jupyter with Python? In an ideal world, I'd just "tag" a cell somehow so that it ran periodically.

To be totally honest, even though I love Jupyter notebooks, I wouldn't teach someone to code using them. See, as someone who's not originally a programmer, I believe that teaching someone to code is intertwined with (gradually) teaching workflows and environments.

Now, Jupyter is great (I'm a statistician btw) precisely because you have a presentable literate programming tool. That is extremely valuable for data analysis and model building.

Now, if you are teaching programming in itself, I think the terminal + text editor is important, since that is a step towards how things are done irl. But if you think the terminal is too much for your audience, just go for IDLE, which has many advantages for Python development + teaching, and is already a step towards text editor + terminal or IDE-level development.

> Now, if you are teaching programming in itself, I think the terminal + text editor is important, since that is a step towards how things are done irl.

Jupyter (and similar) notebooks are a way things are done "in real life"; they aren't just for entertainment. JupyterLab blends the notebook into an IDE, and the IDE-with-notebooks approach is also a way things have been done "in real life", both with third-party tools incorporating Jupyter (and earlier IPython) notebooks and with other IDEs incorporating notebook-style interaction.

Which isn't to say there aren't arguments for teaching programming outside of the notebook environment rather than within it (though the notebook is a kind of super-REPL, and REPLs seem to me to be great tools for teaching). But claiming that not using notebooks is better because it is how things are done "in real life" is just taking a highly selective view of real life.

I'm teaching someone to code right now on the Raspberry Pi, and JupyterLab is working far better for us than IDLE or a separate text editor and terminal. We use the workflow described in http://jupyterlab.readthedocs.io/en/stable/user/documents_ke.... We open our Python file and a console side by side, and pressing Shift-Enter in the file sends a line or selection to the console. We can easily investigate things in the console outside of the file too.

You can just "pip install jupyterlab" on a Raspberry Pi and it works great!

These criticisms could be valid for the Jupyter Notebook, but remember that JupyterLab is what's under discussion in this post. JupyterLab is much more IDE-style than the Notebook, and it includes very capable terminal and text editor components.

Although I'm new to Jupyter Lab, I just tried it and it does feel like it could be a very approachable IDE for educational use.

Your question relates more to the IPython kernel than to Jupyter per se. Fortunately, ipykernel has out-of-the-box integration with event loops, e.g. https://github.com/ipython/ipykernel/blob/master/ipykernel/e... for asyncio integration, and you can then schedule cell updates on the asyncio event loop as you normally would. To have events on the Python side generate visible updates in the Jupyter notebook, you can use ipywidgets, whose Python widget state in the kernel is automatically synchronized with the JavaScript widget state.
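As a hedged sketch of that approach: once the kernel's event loop is running, a coroutine can do periodic work without blocking other cells. Everything here (the function name, the "scan" stand-in) is illustrative, not a real Jupyter or game API:

```python
import asyncio

async def radar_loop(interval=0.5, ticks=None):
    """Run a scan every `interval` seconds; ticks=None means run forever."""
    results = []
    n = 0
    while ticks is None or n < ticks:
        results.append(f"scan {n}")  # stand-in for a real radar/ship call
        n += 1
        await asyncio.sleep(interval)
    return results

# In a notebook cell (the kernel's event loop is already running):
#     task = asyncio.ensure_future(radar_loop())
# In a plain script, a bounded demo:
print(asyncio.run(radar_loop(interval=0.01, ticks=3)))
```

Each "cell" of game logic can be its own scheduled coroutine, which matches the goal of breaking the code into smaller running chunks.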

Thanks! This looks like exactly what I was looking for.

FWIW there's a Snoopy/Woodstock game on code.org that's a much simplified version of this, where players control avatars who throw snowballs. It's turn-based; each player programs their avatar with 6 different commands (IIRC: u d l r jump fire). Fun and very accessible for my 8yo and me.

Can't find it on code.org any more; it's mentioned here: http://www.gameinformer.com/b/news/archive/2017/11/10/little....

The simplest (and still educational) way would be to have the stuff in the different cells as functions, and to collect all the functions that should be re-run in a single cell.

To do it more properly, however, you can use a post_save_hook for the whole notebook, which runs the code from whichever cells you want, however often you want, outside of the notebook. Of course, this way you won't see the output in the notebook.
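For reference, a post-save hook is configured in jupyter_notebook_config.py. A minimal sketch of the hook's shape (the nbconvert export here is just one illustrative action, and assumes nbconvert is installed):

```python
# jupyter_notebook_config.py -- sketch only
import subprocess

def post_save(model, os_path, contents_manager):
    """After each notebook save, export a plain .py script next to it."""
    if model["type"] != "notebook":
        return
    subprocess.check_call(["jupyter", "nbconvert", "--to", "script", os_path])

c.FileContentsManager.post_save_hook = post_save
```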

Yeah, output right in the notebook is important. I'm wanting them to be able to do something like this in a cell:

    for ship in myShip.radarScan():
        if ship.is_enemy:
            display([ship.type, ship.range, ship.bearing])
And then have that cell's results always show the current data as they play the game.

I haven't used it much, but I believe you can do this by using (or more likely slightly extending) the init_cell nbextension.


Oh, that's cool - it's just some JavaScript that essentially clicks execute on the selected cells. That makes sense, and a hacked-up version of this could work fine! Thanks!

If you get this working my cousin teaches elementary school and I'd love to show it to her! Please upload it if you're feeling generous :)

I'll definitely post it to GitHub, and I'll do a Show HN after I've used it in class.

> And then have that cell's results always show the current data as they play the game.

Display handles that you can update from elsewhere in the notebook might be of interest. Here's an example:


> multiplayer navel game

I don't think you mean this, though it could be intriguing just as well..


Nope! Thanks for the catch!

Related to that idea, there is a game called screeps [0] that has a lot of that general idea and executes it fairly well. The payment model discouraged me from sticking with it, sadly.

[0] https://screeps.com

What you describe sounds quite similar to the old Robocode project for Java programming, it may be worth taking a look at it for ideas/implementation: http://robowiki.net/wiki/Robocode

This is actually a good idea for this age range. If this was a college class I would recommend a more traditional programming environment, but this should be just about as cool as you can get if you can pull off the networking.

I haven't looked at Jupyter at all; in plain Python, I'd start with something like this[1]:

  import time

  schedule = []  # mutable [next_run_time, interval, function] jobs
  while True:
      job = min(schedule)            # job due soonest
      time.sleep(max(0, job[0] - time.time()))
      job[2]()                       # run it
      job[0] = time.time() + job[1]  # reschedule
[1] Sketch only; several edge cases (an empty schedule, exceptions raised by jobs) are unhandled.

Hi Teacher - please teach them without war/combat games, there are so many fun things to do that don't involve guns, battles, fire, enemies :)

I know that matplotlib.pyplot has a pause function used when animating graphs. May be worth investigating.

> Any advice on how to do this in Jupyter with Python? In an ideal world, I'd just "tag" a cell somehow so that it ran periodically.

Don't. Use something that will engage them visually and mentally. Kids are not impressed by text; that ship has sailed.

Isn't that what the map on the wall is for?

Absolutely love JupyterLab. Having used ipython for years, I switched to Jupyter Lab about 6 months ago and never looked back.

The things I'm most impressed with (relative to Jupyter Notebooks, which were already amazing):

- The ability to render .geojson, .json, markdown, Vega and Vega-Lite files, and to integrate external tools like Voyager.

- The new terminal is a joy to use compared to what came before it

- The ability to lay out multiple windows easily, much like an IDE

- The plugin ecosystem means that we can start writing custom components for the analytical platform we're building.

Thanks so much to the team!

Also, the modular packages they have built are amazing. They have a node package called @jupyterlab/services which lets you write TypeScript code for your own libraries that can then call the Jupyter server.

An example of using JupyterLab services is provided on their GitHub:


Because of the way they built their packages I have been able to stand on the shoulder of giants and build the following tool:


I am very grateful to the JupyterLab team. They have built something brilliant.

Holy shit. ScriptedForms is amazing. Is there any possibility you would consider selling a dual-licensed version that is not AGPL3?

I am a heavy notebook user, but I just decided to switch to JupyterLab 5 minutes ago. This will make working on a remote server so much nicer: just start JupyterLab and an SSH tunnel, and you have a terminal plus notebooks plus file editors all in one browser window. Great improvement, thanks!!

Never heard of Vega-Lite for generating graphics. How does it compare to matplotlib? It is a lot of effort to learn a new graphics lib, and I'm already a little proficient in matplotlib. Should I learn it?

If you like Jupyter, Google Colab has a Google-Docs-style Jupyter notebook that's quite good. It's nice for collaboration.


Are there any plans for JupyterLab and Colab to unfork? Parts of Colab seem nice, but this feels like a Google product that could get pulled at any moment.

There's also an official Google Drive extension for JupyterLab worth checking out. I know they demoed it a few times before the v1.0 release, and it looks very promising.


Also worth checking out are the renderers; the Vega and GeoJSON ones are really cool.

> There's also an official Google Drive extension for JupyterLab worth checking out.

As the readme states, the realtime API it relies on has been deprecated.

As far as I can see there's no way for a user to set the size of their infrastructure.

Yeah, you can connect a GPU for machine learning, but that seems to be it.

I've read that Google Colaboratory provides 12 hours of free GPU usage. Even without using the GPU, I find the Colab version useful.

And they now support both Python 2 and Python 3.

How much public information is there about how much support this will get outside of Google?

Very little is likely the answer.

Mathematica is wonderful in terms of sheer computational power, but the notebook interface it presents is hopelessly outclassed nowadays by initiatives such as these. I keep hoping Wolfram will spring some impressive new interface on us that will enhance usability for power users (rather than their weird attempts at bringing ‘computation’ to random casual users), but... I'm giving up hope.

This looks very impressive.

As someone who used Matlab and Mathematica in college, I'm not sure I'd want to have a closed source solution really take off again at this point. While they provided great products and documentation, they also made it much more difficult to share code/visualizations due to restrictive and expensive licensing. Ultimately, I think the ability to share information easily should help spur scientific advancement.

As a single piece of anecdata, this is exactly what prevented me from using the free licenses I had through my university to learn these programs. After my experiences with Stata, seeing the non-affiliated prices and having to do my work in a computer lab instead of on my own laptop, I didn't want to be tied to software that I might not have access to in the future, and that I couldn't share with other people.

The one time I made a handsome demo in Mathematica, I realized that there was no straightforward way to share it, so I gave up and redid it in R.

I'm grateful I never spent time learning those programs. I still miss Mathematica now and then, but free software is the only way to go for me nowadays.

Dumping SolidWorks for the less elegant but much cheaper Fusion360 for the same reason. You can’t build a community at corporate license prices.

Mathematica/Python/MATLAB user here. Mathematica and MATLAB both ship great documentation and intuitive examples with their products; however, documenting your own code using their systems is surprisingly hard (compared to Sphinx). Jupyter took the idea of the notebook from Mathematica (and other notebook-based math software), but its killer point is that it extends to more and more general-purpose programming languages, with Python, R, Julia and even C/C++ kernels. MATLAB is still popular in control engineering and communications, partly because of its Simulink simulation environment. I have been looking for a Python alternative, but the closest I have found so far is Modelica. In my workflow, Mathematica is used to derive the mathematical formulas, MATLAB for simulation, and finally the software is written and distributed as a Python package.

You could try the SciPy Simulink look-a-like https://www.scilab.org/scilab/gallery/xcos

SciLab != SciPy

Same confusion here; it has a Python extension, but it's not really close to SciPy.

My workflow is very similar. But I am often forced to use MATLAB for simulation because the vendors I work with only deliver MATLAB libraries. They choose MATLAB because many engineers know how to use it and have licenses for it, but also because it provides mechanisms to protect IP, which is a big deal to many firms.


I assume this situation also holds true for those using LabView to build their virtual instruments. IP wasn't really an issue for us since we have decided to open-source our project to broaden the knowledge of the public: https://github.com/Maritime-Robotics-Student-Society/sailing...

As a note, in Python there is pyconcrete for code encryption: https://github.com/Falldog/pyconcrete

You might want to try SymPy or Sage as a Mathematica replacement.

I share your sentiment, but in truth, Mathematica's symbolic algebra capabilities are above and beyond what everything else offers by a wide margin.

For now

For the last 20 years at least, and the gap widens if anything.

Sure, Mathematica has had a long head start. I don't see the gap widening at all, though. For me, e.g. SymPy (and more specialized algebra systems for e.g. quantum mechanics I've built on top of it) now matches and sometimes surpasses what I could do with Mathematica. More importantly, though, Mathematica's closed nature makes it quite hard to integrate with other systems and workflows (as others have pointed out in this thread). The benefits of the open scientific ecosystem around Jupyter far outweigh the few areas where Mathematica still leads in functionality. This is a subjective assessment, of course.

SymPy is great (especially the ability to automatically codegen numerical expressions from symbolic ones), but I've found it to be orders of magnitude slower than Mathematica for some relatively simple problems. I also find the SymPy documentation somewhat sparse and hard to navigate.
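For readers unfamiliar with the codegen path mentioned here, a tiny sketch: derive an expression symbolically, then lambdify it into a plain numeric function (the specific expression is just an illustration):

```python
import sympy as sp

x = sp.symbols("x")
expr = sp.diff(sp.sin(x) * sp.exp(x), x)  # symbolic: exp(x)*(sin(x) + cos(x))
f = sp.lambdify(x, expr, "math")          # generate a fast plain-Python function

print(f(0.0))  # exp(0)*(sin(0) + cos(0)) = 1.0
```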

A number of years ago I knew a postdoc who bought a student copy of Mathematica at the university bookstore so he could use it as part of a short-term collaboration without dropping a non-trivial chunk of his grants on a full license. Shortly after he installed it, the department chair got a personal email from Stephen Wolfram asking what someone who wasn't a graduate student was doing with a student license.

This supported the upward trend of interest in open source, and especially IPython, around the labs.

That’s pretty horrifying.

That's all nice, but go to real-world space/robotics/drone shops and they all use MATLAB or similar for their scientific computation and validation, and treat the Jupyter stack as a toy. Similar to telling a photographer to use Linux because there is GIMP and that should be enough for them.

I think a better analogy would be how large swathes of the population use Windows for everyday computation and treat Linux as a toy, whereas individuals with more domain knowledge take Linux seriously and use it in production in a variety of environments (e.g., high frequency trading, web hosts, commerce sites, etc). Engineers may prefer MATLAB as a "serious" development environment, but most software developers find it to be seriously lacking. Most general purpose programming languages are far more powerful than MATLAB, but simply lack the toolboxes/convenience. Both of these points are nothing new and have been observed since the creation of MATLAB (indeed, the latter is the reason why MATLAB was created, since MATLAB is basically supposed to be a pretty face for FORTRAN).

Yeah, I work at a startup where we specialize in signal processing + ML. We have a robust set of infrastructure in MATLAB (use our own ML libraries rather than MATLAB's libs) and try as we might, we haven't been able to switch altogether to Python (despite a number of us being fans of Python).

MATLAB has a TON of things that you don't get with Python + Numpy + Scipy. Complicated plots/graphics with interactivity are a pain point in Python compared to MATLAB. Similarly, the debugging capabilities in MATLAB are truly magical compared to a pretty terrible experience on the Python side of things. Even though we deploy software in Python, we are much faster prototyping in MATLAB and deploying finalized algorithms/software to Python than trying to do everything in Python from the get go. MATLAB's JIT is also pretty great and while Numba is pretty great, it still requires more work and can be brittle at times.

Have you tried bokeh for interactive visualizations? https://bokeh.pydata.org/en/latest/

You can really do photography on Linux nowadays though. Darktable and rawtherapee are shaping up nicely.

I am a pro photographer and still have to use Lightroom for proper color adjustments. I even had competitions with Darktable pros to see who could get a better picture, and unfortunately Lightroom outclassed them (and I hate Adobe's forced subscriptions). Darktable still needs a lot of work to be cutting edge.

Can you share some specific weaknesses Darktable has? (I've never used either.)

But Matlab has much less restrictive terms when it comes to code, right?

And then there's LabView, used widely for lab and process automation, which really has no open-source or free alternative.

Last time I tried Jupyter, about a year or two ago, the user experience wasn't even close to that of Mathematica. I guess I'll give it another chance, but can you tell me in what way Mathematica's notebook interface is outclassed, in your opinion? The only major beef I have with Mathematica is its default stylesheets. They suck, every single one of them. There is no good distinction between input and output fields. Whenever I install Mathematica, the first thing I do is make the "Natural Color" stylesheet [1] the default, but unfortunately it hasn't even been present in the options since a few versions ago.

[1] https://imgur.com/a/83ZyF

I agree here. Being able to write code in the mathematical rendering is such a beautiful feature of Mathematica that Jupyter is missing. I am not sure how it's outclassed as a notebook because of this. As an IDE? Sure, Mathematica's notebook isn't a good IDE at all. But as a notebook for writing mathematical statements, it's fantastic. It's the one thing I need in order to go to an only Julia workflow.

> Being able to write code in the mathematical rendering

I wonder, what do you mean by this? You mean it's possible to write code in the Out[] cells in Mathematica? (is that possible?)

Or do you mean writing math in the notebook?

People often overlook the Eclipse IDE for Mathematica...


Yours is a good question, and I can't really provide a good answer. Firstly, I suppose I wish for something more akin to an IDE, with a proper list of variables, functions, classes, et cetera; tabs; and, as the screenshots of this JupyterLab beta release show, the capability to display different data types in self-contained windows that are not inlined into the notebook document.

I probably expressed myself very poorly.

Nowadays, the notebook is probably Mathematica's biggest brake. For me the GUI feels slow on any computer. However, as already mentioned in other posts, under the hood Mathematica has not yet been beaten for its Lisp-like approach to computer algebra. The biggest contender in the Julia/SciPy environment is probably the SageMath computer algebra system; however, it follows more the route of Maple, which is more natural for the employed Python host/meta/binding language but (for me) does not feel as powerful as Mathematica when it comes to symbolic computing.

I think the big advantage of Julia over Mathematica is the modern software stack and the bleeding-edge technology it can embed very quickly, such as d3.js, WebGL, etc. The web world is moving quickly and the scientific Python community can keep up, while Mathematica moves only very slowly -- note that they started their web interface (the cloud version) only a few years ago.

Mathematica still has some parts that are nice though. Especially interactive plots (which you can do in IPython but not as simply) and some of the neat metaprogramming stuff. A big thing is that Mathematica (the language) is homoiconic, which I think works really well for their interface.

But I prefer IPython as well. Mathematica is awesome, but Wolfram is pretty heavy handed with the proprietary stuff.

Edit: one thing I'd really like to have is some of Mathematica's tools for managing scope, like Module and similar. You can kind of do this, but it's clunky. On top of that, I'd want some way to limit declarations to a section (group of cells). It's awkward to have multiple separate sections in an IPython notebook because your declarations start to overlap. Unless someone here already knows how?
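One common workaround in plain Python, sketched here with made-up section names: wrap each section's code in a function, so its working names never reach the notebook's global namespace and sections can't collide.

```python
def section_preprocessing():
    # names bound here (threshold, x) stay local to this "section"
    threshold = 0.5
    return [x for x in range(10) if x / 10 > threshold]

clean = section_preprocessing()
print(clean)  # [6, 7, 8, 9]
```

Only the values you explicitly return leak out, which approximates Module-style scoping at the cost of some boilerplate.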

That's more a restriction of Python, though. It's important to remember that the Jupyter Notebook (and now JupyterLab) is language agnostic. The Julia language, for example, is very well integrated into Jupyter (which, in fact, is a backronym for Julia-Python-R), and Julia is homoiconic.

I also like Mathematica, and I think the curated data is amazing for impromptu explorations. I guess there are more and more sources of data available online for Jupyter to query; however, Mathematica's API and access to this curated data is effortless.

I first used Mathematica in 1992 in a NeXT lab at Ohio State. It seemed like magic. If I recall correctly it had a notebook interface way back then, it was so cool doing 3D plots and tinkering with values to get a better intuition of the math.

At http://www.mathematica25.com/ you can see that the notebook front end was part of Mathematica's first release in 1988.

One of the images from 1987 says "The Mathematica front end begins to take shape…" and "(Theo Gray invents cells and groups … and other things still seen today…)".

Did a legit Owen Wilson woooow on that one. What a flashback, thank you...

Loving Jupyter; I use it every day and will for sure try this out. What I do miss, though, is good separation between code and data. It is a pain when someone just takes a look at your notebook and it autosaves: the code block counters reset, this alters the file, and git reports a lot of changes.

There is a pip package called "nbstripout" which tells git to ignore notebook output: https://github.com/kynan/nbstripout It can really help establish good practice in a project with little effort:

    pip install --upgrade nbstripout
    nbstripout --install

This, so much. It is my only real complaint about notebooks. It would be so much nicer if a notebook were split into two files, like `my-notebook.input.ipynb` and `my-notebook.output.ipynb`, where `my-notebook.input.ipynb` would contain only code and be editable with any text editor, similar to a .md file (and not some verbose JSON). The output file would contain all outputs, so they would be easily separated from the input if needed.

I can see the benefits of stuffing everything into a single file, but separating them would be so much better IMHO. Version control is too important to mess with. Sometimes I want input and output to be version controlled, sometimes only the input. By splitting, I can easily do that with simple .gitignore rules.

The notebook server has "contents managers" which decide how notebooks get stored. It is perfectly possible to write what you request, and some users have done it: https://github.com/aaren/notedown stores notebooks without the outputs, but it's easy enough to add them.

The other possibility is to export a notebook as an actual file-and-folder tree: https://github.com/takluyver/nbexplode so rich objects (png, svg, etc.) are independently editable.

It can be challenging to make this work well, though, because of different filesystems.

You can even go further and tell the server to store nothing on disk but everything in a database, Postgres for example: https://github.com/quantopian/pgcontents

> Version control is too important to mess with.

Version control is too important to be left to the content-blind tools we typically use for it. In a perfect world, there'd be a core version control engine with content-specific plug-ins.

I feel your pain. I have been using git filters for this, specifically this repo: https://github.com/toobaz/ipynb_output_filter

It strips all notebook output from *.ipynb-files before commit.

In JupyterLab, under the `Settings` menu, there is a setting you can toggle (Autosave Documents) if you prefer to turn off this behavior.

This is really exciting for the team! However, I don't think I'm particularly sold on the notebook style of coding. It's possible that I simply haven't found a good use case for it. Can anyone suggest an example where the notebook style outperforms a simple script-based style?

For reference, I use MATLAB and Mathematica pretty heavily, and Python in a text editor like Sublime along with a terminal running an IPython shell.

I find notebooks to be great for prototyping longer pipelines or processes. Instead of having to constantly get fresh data, particularly if it's from an external API, the notebook can persist the data in memory and you can iterate on the next piece of the process right there.

I then take that and make it a more formal script/process w/ version control and all that fun stuff. They're also really great for learning. I just wouldn't put them in production :-)

I'm using Python for lab automation, so my data comes fresh from an experiment. I've found that keeping stuff in memory is extremely convenient until the kernel shuts down for some reason (e.g., I inadvertently kill it while forgetting that I've left a notebook open).

Still, I love having my data collection scripts documented right there with the subsequent analysis. So, I've disciplined myself to handle experimental data in one of two ways:

* For "small" data, format it as a Python thing (list, dict, whatever is appropriate), and paste it into the next cell as an input. I haven't found a way to do this automatically, and I'm careful not to make things too automatic lest I run a cell and over-write old data.

* For "big" data, dump it to a file. I just turn the system time into a filename, to avoid over-writing an old file.

I don't think I've come up with the last word on using Jupyter as a data-collecting lab notebook, nor am I yet 100% certain that it's even a good idea. This is a work in progress, but much better than anything else I've ever tried. For complicated experiments, I still create stand-alone Python programs to control things.

This is the no. 1 reason I use notebooks. I recently worked on a Python library for an undocumented API that returned broken, non-semantic HTML. Counting spans, parsing inline styles - that kind of hell. I honestly don't think I could've done it without Jupyter.

I do the same thing with Emacs + Elpy, I love interactive programming!

I'm still not sold, personally. It seems like the in-memory persistence is only useful for the intermediate case where my data is slow enough to generate/obtain that I don't want to run the code every time, but fast enough that I don't mind running it every time I launch the editor. Most of the data I have that's worth caching for speed is worth caching to disk. Combined with the unpredictable side effects of variables persisting while I'm actively hacking on the code, the implicit in-memory persistence is pretty off-putting.

A recent workflow I've had for a data analysis project is to have each stage of data processing in a separate function, with all the functions called in order from an `if __name__ == '__main__':` block, with all but the function I'm presently working on commented out. Each function returns nothing, but saves its data to an HDF5 file. Other functions read the inputs they need from the HDF5 file and write their outputs to the same file, and if I want a fresh run I just delete the file, uncomment everything in the '__main__' block, and run again.

The functions also save output plots to subfolders.

This is compatible with version control, and caching on disk rather than just in memory.

The biggest downside compared to Jupyter notebooks is lack of interactivity in the saved plots (I can make interactive plots pop up of course but they're all in separate windows all at once so it's less clear which part of the code each plot came from), and lack of LaTeX in code comments - I still will have external LaTeX documents explaining what algorithm I'm using somewhere.

So for now, the downsides of notebooks with respect to version control, data caching and extra state that I have to remember in order to not hit subtle bugs in my code as I hack on it, seem to outweigh the upsides.

Maybe what I would like is an editor that renders LaTeX in comments, and which embeds arbitrary plot windows at given points in the code, but without any data persistence, and without the embedded plots actually being saved anywhere - your file is still a normal Python file and it's just the editor rendering things that way based on magic comments or something.

Or maybe I should just write a decorator that renders a function's docstring as LaTeX and embeds any matplotlib windows produced into one scrolling document with the sections named after the decorated functions. Decorator could take an argument telling it whether to include the full source of the function, the comments of which it could also render as LaTeX. Then you have input code compatible with your favourite text editor and version control, and an output document which optionally includes the code.

Nice! That seems like a great way to go about using it.. I'll have to give it a shot for my next project :)

Using python in an editor next to an ipython console is exactly the sort of workflow that JupyterLab supports. See https://jupyterlab.readthedocs.io/en/stable/user/documents_k... for a walk-through of how this workflow can be used in JupyterLab.

I'm using RStudio notebooks heavily in my latest bioinformatics analysis pipeline. They're a great way to produce an HTML report containing code, exposition, results, and plots all in one place.

https://github.com/DarwinAwardWinner/CD4-csaw (look at scripts/*.Rmd)

I wrote an article a few months ago on the differences between R Notebooks and Jupyter Notebooks (and why, IMO, R Notebooks are better): http://minimaxir.com/2017/06/r-notebooks/

R notebooks are the bee's knees and frankly I'm surprised Jupyter hasn't borrowed more from them. It's so much easier being in plain text until render time, and the output is easier to manage because you can trivially decide which chunks you want to echo, evaluate, plot at 2x size, etc., without any change to the interactive usage. Not to mention you get to retain your nice IDE features like good code completion, doc lookup, version control...

I am excited for Jupyter Lab and it's a step in the right direction. But it feels a little bit like they're reinventing the wheel with some of this stuff. I would gladly pay money for a python copy of the R ecosystem with RStudio, R markdown, R notebooks, where everything just works great by default.

Yup, revision control is the elephant in the room with Jupyter, and why I struggle to recommend it for reproducible research.

R Notebooks followed the org-mode model of keeping a simple, revisionable document with code interspersed.

Thanks for sharing... I'm always excited to see examples of RStudio notebooks in the wild.

Besides the other replies, notebooks are also great for an extremely easy literate programming style, i.e. when you want to explain in text or images as much as you want to do. Not only is that easy to create as a notebook, it's easy to share.

This makes sense, it seems like one of its intended purposes is as a great pedagogical tool which I think would work well.

It's hugely helpful in the consulting world. We use it all the time for proof-of-concept type work -- it's much easier to present a notebook to a CTO than a bunch of scripts.

Better than PowerPoint to management in some cases!

It makes sense for one-off / exploratory-idea-style scripts, exactly the sort of thing that scientists will do, which is probably why it's so popular with them.

If I'm writing code for data science purposes and I'm not planning on putting that code directly into production (i.e. exploratory analysis, general offline analysis, etc.), that's when I reach for a notebook.

Rapid experimentation, and embedding graphs/images. I use it whenever I want to try new things with CV/ML/etc.

So I can now use a decent text editor (emacs) to edit Jupyter notebooks? Great! I do wish it was possible to interleave languages like with org-mode, though. I've yet to find a literate programming/notebook format I'm truly happy with.

Edit: oh.. I misread. It doesn't support using external editors. All I want is some way to edit those text boxes with another program. I can't do any serious work in a web browser. It's awful.

Have you tried "EIN" in Emacs? https://github.com/millejoh/emacs-ipython-notebook

I have used it quite productively for a while, but at the moment have mostly moved back to the browser for my notebooks. I can recommend collecting larger functions in a separate source file (for Emacs editing bliss) which you import into the notebook. [import helpers; reload(helpers)]
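That pattern, sketched out (note Python 3 moved `reload()` into `importlib`; `helpers.py` here stands in for your own module, and the file writes below only simulate editing it externally):

```python
import pathlib
import sys
from importlib import reload

# Simulate the external editor writing helpers.py next to the notebook.
pathlib.Path("helpers.py").write_text("def answer():\n    return 1\n")
sys.path.insert(0, ".")

import helpers
print(helpers.answer())        # 1

# ...edit helpers.py in your editor, then pick up the changes:
pathlib.Path("helpers.py").write_text("def answer():\n    return 2  # edited\n")
reload(helpers)
print(helpers.answer())        # 2
```

This keeps the bulk of the code in a plain, version-controllable file while the notebook stays thin.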

I did try it, but it seemed to break "undo" unfortunately. That's something that will make me stop using a package immediately. I also tried ob-ipython for org-mode but found it not ready unfortunately. I think I will have to do what you suggest and just minimise the amount of code written in the notebook itself.

This is great for working with python - https://github.com/gregsexton/ob-ipython

I haven't had as much luck using it for other languages, but I also haven't put in much effort into trying.

Thank you! I've been looking for this and didn't even realize. ob-julia is awful; really hope this can be made to work better. Also several obscure languages I've wanted to include snippets of in org that actually have Jupyter kernels...

You can evaluate different cells using different kernels, by using a cell magic command. So a notebook can have a mix of languages. Is this something like what you are after?

It sounds like that is essentially the same thing, yes. It still won't be as good as org-mode, though. With org, when you enter some Python code you press a key and you're editing that block with the best editor for Python (emacs) and when you edit bash code you're using the best editor for bash (emacs). With this, you have to use the same basic editor for everything.

There is the %edit magic, so I suppose wiring up emacs as an editor for Jupyter is not that far-fetched.


Not suggesting that it already does everything right now, but rather that, with a bit of coding, implementing something similar for cells does seem feasible.

On behalf of Azure Notebooks team, a huge congrats to the team!

If you'd like to try it, pick any of your libraries, right click and select “open in JupyterLab”.

Much love from the Jupyter Team to the Azure Notebook team !

Great, I was hoping you would support it. Loving Azure Notebooks.

is there an easy way to run azure notebooks on gpus?

I'd really like one thing from these articles: I didn't know what Jupyter was in the first place.

So please, dear authors: when I click on your articles, I'd like a single sentence somewhere on the landing page where I can easily figure out what we're talking about, without having to read entire paragraphs.


The source article is a blog entry on the Jupyter project's blog; I don't think it's unreasonable to expect that readers of the Jupyter blog have some idea what Jupyter is; it's kind of unreasonable to expect every blog entry to repeat that.

Now, readers of HN might not know, and HN's decision (which is, on balance, I think beneficial) to not allow additional supporting commentary besides the title on posts with links to outside articles prevents contextualizing this well for HN readers. (Perhaps allowing one or a small number of supporting links with very brief annotations might be an improvement, but we really do want to avoid Slashdot-style editorializing of submissions, which the current setup does quite efficiently.)

The closest thing to what you're asking for is the first image in the post, which is a screenshot that has this caption:

> JupyterLab is an interactive development environment for working with notebooks, code, and data.

But what are jupyter notebooks? Had to Google it to find out. Still not sure what it does.

This isn't meant to be snarky, but maybe this post just isn't for you, in the same way that posts about Ruby on Rails simply aren't for me.

On the other hand, if you use Python, you should definitely check out Jupyter notebooks (formerly IPython notebooks, and now JupyterLab, I guess). They're useful when prototyping data pipelines, since the state of the interpreter is saved, letting you iterate on ideas and see the outputs quickly.

If you like Jupyter you might also like https://beta.observablehq.com/ which was released recently, in case you missed it.

I love the term "reproducible computational narratives" from their post. This is a great step toward making software accessible for everyone. In addition, imagine how transparent governments and open source communities can be by helping explain the algorithms in use with a clear provided sense of understanding for everyone. On the project level, it can be useful for keeping it simple. It's all good stuff!

Though a year old, https://www.oreilly.com/ideas/the-state-of-jupyter might interest you as well.

So let's say I am progressing inside a jupyter notebook top to bottom. I run code blocks, then some markdown, then code blocks, and so on.

At some point I need to drop down to the terminal to run something. I run commands in the terminal I collect some results or collect some info and go back to my notebook to resume my work inside it.

Later I need to look up something in a text file. I open a certain text file. Browse to a certain line number, read that line, maybe edit the text file, and close the text file to go back to my notebook.

Does JupyterLab keep a record of the point in my progress in the notebook when I switched to the terminal or the text file, what I ran in the terminal, and what info was used? If I edited the text file, what was before and after of the text file? In other words, does JupyterLab help with the chronology of workflow events?

If not, I don't see how this is anything other than hundreds of "IDE"s out there.

Notebook format has its own issues, but going back to IDE is not a solution. Offering both notebook and an IDE at the same time and leaving it up to the user to make the best of the combo is not a solution either, unless the offering helps some kind of a way of eliminating the cons of either format.

Jupyter doesn’t track the execution history of your cells like this.

However, you can run the shell commands straight from a notebook cell (use the %%bash cell magic or prepend the line with !).

Not sure what to do about editing data files. If you can do this with something like awk, just use a shell magic cell, but if it needs to be done manually I guess you’re stuck manually documenting this in markdown?
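For reference, both forms of shelling out look like this in a notebook; since the `!` and `%%bash` syntaxes only work inside IPython, the sketch below shows them as comments next to a plain-Python equivalent:

```python
import subprocess

# In a notebook cell you can write either a single inline shell line:
#   !grep -c "pattern" data.txt
# or run the whole cell under bash with the cell magic:
#   %%bash
#   grep -c "pattern" data.txt
#
# What IPython does under the hood is roughly equivalent to:
result = subprocess.run(["echo", "hello from the shell"],
                        capture_output=True, text=True)
print(result.stdout.strip())   # hello from the shell
```

With `!`, you can even capture the output into a Python variable (`files = !ls`), which makes the shell steps part of the notebook's record.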

Replying to my own comment, I have a plot and a text file open side by side. I want to change a value in the text file and I want the plot to be updated automatically. The update might not be trivial, behind the scenes maybe the text file is an input to a simulation that runs for a minute, then computes some values in a table that are used for the plot. Can Jupyter allow you to do that?

Point being, offering an IDE in 2018 is not interesting unless you add something "smart" to the IDE that makes the life of the engineer/scientist easier compared to the rest. Otherwise, IDEs have been in development for the last three decades or more.

With respect to your parent comment: the IDE can be made to log UI context switches, but it cannot interpret intention. This is a very unusual and, I suspect, very sub-optimal thing for an IDE to support, whatever year it is supposed to have been built. If you really want those events to be recorded, you need to either annotate your code, or programmatically have it invoke something.

As for graphics updating on file-change event, that can probably be supported through an extension. This is similar to how certain LaTeX editors automatically re-render the doc on a change-event.
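A minimal sketch of that file-watch idea, using only the standard library, with a hypothetical `rerun` callback standing in for the simulation-plus-plot step:

```python
import os
import tempfile

def check_and_rerun(path, state, rerun):
    """Poll `path`'s mtime and call `rerun` when the file has changed.
    `state` is a dict remembering the last seen mtime. In a real
    extension you would call this from a timer or a filesystem event."""
    mtime = os.path.getmtime(path)
    if state.get("mtime") != mtime:
        state["mtime"] = mtime
        rerun(path)                      # e.g. run simulation, redraw plot
        return True
    return False

# Demo with a temporary "parameter file":
fd, params = tempfile.mkstemp()
os.close(fd)
state, runs = {}, []
check_and_rerun(params, state, runs.append)   # first sighting -> rerun
os.utime(params, (0, 12345))                  # simulate an edit
check_and_rerun(params, state, runs.append)   # changed mtime -> rerun again
print(len(runs))                              # 2
```

A production extension would hook into the server's file-save events instead of polling, but the state-plus-callback shape is the same.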

At the same time, the community has faced challenges in using various software workflows with the notebook alone, such as running code from text files interactively. The classic Jupyter Notebook, built on web technologies from 2011, is also difficult to customize and extend.

Do I read this correctly as hinting that Jupyter Notebook is being replaced by an IDE?

We should disambiguate the term "jupyter notebook" here. Jupyter notebooks, as documents that contain code, markdown, math, widgets, etc., are a central feature of JupyterLab. Jupyter notebooks are not going away, and are getting better in JupyterLab.

The "Jupyter Notebook" web application (i.e., the browser application that was originally released in 2011) will eventually be replaced with JupyterLab.

Having used JupyterLab alpha extensively, I don't think so. JupyterLab makes the 'notebook' one type of document, rather than the only type of document.

So it extends Jupyter Notebooks by giving you new IDE-like features, whilst retaining the ability to write notebooks.

> JupyterLab 1.0 will eventually replace the classic Jupyter Notebook. Throughout this transition, the same notebook document format will be supported by both the classic Notebook and JupyterLab.

Guess so :-)

Jupyter was already one of the greatest Python projects out there, this just takes it to another level. I can't wait to use this. Enormous kudos to the Jupyter team.

I very heavily rely on the "official" notebook extensions from https://github.com/ipython-contrib/jupyter_contrib_nbextensi...

This especially includes things like "Table of Contents", "Variable Inspector", "Ruler", and "Execute Time". How easy will it be to have all of this functionality in the JupyterLab notebooks? There are certainly advantages to having data/terminals/notebooks in an IDE-style layout, but for the moment it would still be two steps back, one step forward for me personally. This is not to disparage the effort; JupyterLab clearly is the future!

First of all, these are not "official" (hence why it's a different organisation on GitHub); they are all maintained by the community.

It will take some time to port all the existing extensions, but the good news is that JupyterLab was designed to work with extensions (actually, everything in JupyterLab is an extension, with no privileged components), so it will be easier to write them for JupyterLab than for the current notebook.

The documentation on writing extensions is also much better than for the classic notebook, and we have had new contributors writing extensions in 2 to 3 hours.

So we encourage you to try it and send us feedback!

Will do! And thanks for all the great work around the Jupyter ecosystem. I didn't mean to come across as too critical!

Didn't mean to imply you were critical; sorry if that's how I expressed myself. Just trying to explain what didn't end up in the blog post. Thanks!

Been trying to use JupyterLab over the weekend and had some issues with it. The idea is great, though. I also look forward to the extensions. Data pipelines and dashboards can be built in visual applications much more quickly than with coding, so maybe we can now program extensions for that? Also, with vega-lite and flask you can build dashboards from scratch, and JupyterLab is great there (as you can mix code and notebooks). I think JupyterLab is very well positioned to be a complete end-to-end analytics tool, from raw data to dashboards and visuals. If the extensions are powerful enough on the UI side, who knows? Maybe even business users could use JupyterLab.

I like Medium, but I always struggle to find a link to the company's site. Is this hidden somewhere? I had to google it to find it; not that it was hard, but I thought it would be somewhere on their Medium blog.

Oh! Thanks, we'll try to fix that! BTW, we're not a company; we're mostly sponsored by a non-profit, academics, and volunteers.

As someone new to Jupyter (and coming from the Microsoft C# world, mind = blown, many kudos): do you (or anyone else reading) happen to know, off the top of your head, a particularly well done, information-dense notebook (not dumbed down, showing what you can do, but skipping the hand-holding explanation; one can get that elsewhere) that demonstrates a broad range of pandas and Jupyter capabilities?

Wes McKinney posted a 10-minute "whirlwind tour of pandas" [0] several years ago that still serves as a pretty good example of what is possible for a practiced pandas user.

[0] http://wesmckinney.com/blog/whirlwind-tour-of-pandas-in-10-m...

The Nature article and notebooks from a few years back are still pretty good for demos, though everything has updated since then, especially iPython -> Jupyter.




s/iPython/IPython/g otherwise it looks like an Apple product :-)

When I checked two weeks ago, there was a bug where opening a notebook with a large number of cells (~200) in Firefox would freeze it for about 10 seconds if the window was resized. The problem, as far as I know, only occurs in Firefox, not in Chrome.

This was problematic, especially for me, as I open documentation on the other side of the window and keep resizing the window out of habit. But overall JupyterLab was great. You can work on the same notebook side by side too, and it has a file manager/viewer panel.

Please try it again. In the last few weeks we've made some changes that drastically sped up things in Firefox, and we have plans for more changes in the pipeline.

See https://github.com/jupyterlab/jupyterlab/pull/3805 and https://github.com/jupyterlab/jupyterlab/pull/3802 for more details.

If you're on Windows, perhaps this would be nice for your workflow: http://www.ivanyu.ca/windock/

This looks like it could be quite the competitor to RStudio!

Complementary! We love RStudio (and JupyterHub can run RStudio). RStudio is still more tightly integrated with R, but we are progressing.

It would be great to have community plugins that make RStudio able to open Jupyter files, and JupyterLab able to open RStudio files!

Woohoo! Congratulations JupyterLab team. It is a brilliant thing being built.

I am going to ask a very dumb question: if I am not an exploratory data scientist, what do I use Jupyter/IPython for, and how? I keep meaning to try it out but never quite got round to anything but toy stuff.

How does it fit into a developer workflow, or do I need a different mindset?

What should I try to do with this beta to get my mind right? That is probably the best question.

No, it's a fair question.

To be honest, if you're not doing exploration or quick prototyping work (you don't have to be a data scientist though), Jupyter might not be that useful to you.

Jupyter is really useful when you have intermediate results that you don't want to keep regenerating. It lets you test different ideas at any given point in the program without re-running everything above it -- kind of like a pause button (a garden of forking paths). And if you do have to change any code, you can change things in situ without re-running the entire program. It's like programming with a tape recorder with mutable state... hmm, OK, maybe that isn't a good analogy, but close enough.

For quick scripts, I reach for vim and run my code on the console, and insert "import ipdb;ipdb.set_trace()" wherever I need breakpoints.

For more complex work where there are different permutations, and many throwaway branches of ideas that I have to test, Jupyter (or any notebook type tool) is way more useful.

My work recently paid for me to do an R training course; the consultant who delivered the training (via Skype) used a Jupyter notebook to deliver the entire course.

Imagine a PowerPoint presentation, but with code samples that can execute inside the slide without ever leaving the presentation.

I thought it was pretty nifty. I don't know how many programming languages it supports, but it seems like a good training tool. When I was at university (back in 2003), the lecturer would constantly switch from their PowerPoint slides to Emacs in order to demonstrate code; this seemed like a more streamlined version of that.

I'm trying to get a JupyterHub or JupyterLab instance deployed at my university. JupyterHub has documentation on deployment, authentication (e.g,. LDAP, CAS), and so on. I'm guessing that JupyterLab will eventually get things like this. Does anyone know if there will be a migration path for JupyterHub to JupyterLab?

Read this: https://zero-to-jupyterhub.readthedocs.io/en/latest/ and then see the "Customization Guide" section, which has instructions on how to set up Lab by default.

JupyterHub is the multi-user application which proxies a single-user application. Jupyter notebook is the default single-user app, but it can be configured to use JupyterLab.

Also, it should be noted that, since it just proxies a single user notebook server, JupyterHub doesn’t support any interaction, sharing, or other collaboration between users (though one could implement some of that by mounting an NFS share inside notebook containers). Still, I’ve found it quite useful for teaching. IIRC, there’s also a project underway to bring collaboration features to Jupyterhub.

To JupyterLab, yes. It's working, but Google retired the real-time API, so the project has seen some delays while we reimplement the backend server where we were hoping to use Google Drive. CoCalc (ex SageMath Cloud) already has real-time notebooks.

A key point - for me as a user unlikely to develop extensions- is buried some paragraphs down. "JupyterLab 1.0 will eventually replace the classic Jupyter Notebook. Throughout this transition, the same notebook document format will be supported by both the classic Notebook and JupyterLab."

Wanted to give a shout out to phosphor - the underlying UI framework used in JupyterLab. Used it for some non JupyterLab use cases and found it a pleasure to use. (https://github.com/phosphorjs/)

This is the closest I've seen anyone actually deliver upon the promise of OpenDoc [1] and similar tech like KParts, but in a more user-accessible packaging. I'll be interested to find out its scalability and performance. If it renders only the part of a notebook that is visible, while continuing to run threaded computations of hidden components, then that might help some of the performance limitations I'm seeing people write about. That might be important to scale up collaboration as well in the future.

[1] https://en.wikipedia.org/wiki/OpenDoc

It does render the whole doc (at least for notebooks) IIRC, but that's an implementation choice. As long as there is a model somewhere, it could render only what's visible.

Note that almost all the rendering is, for JupyterLab, client-side, and that for scaling we know of single JupyterHub deployments that have close to 5k users. Horizontal scaling of Hubs is improving, and we hope to have a more robust solution soon.

Love the original version of jupyter and this looks great too. Thanks for all you guys do

I used the beta of JupyterLab and I was really impressed. Congrats to the Jupyter team for releasing this! I was at Jupyter Day in 2016 when they were talking about the roadmap to this. It's really exciting for this to come out!

This is awesome. Jupyter basically drives my whole company at https://kyso.io; we are gonna implement JupyterLab really soon, once we've done sufficient testing.

I think you have a typo (or mobile font problem) on the homepage. "platformin" on Chrome Android.

Is it possible to have a notebook and a console using a single, shared kernel?

If you have a notebook open in JupyterLab with a running kernel, you can go to the menu and open File > New > Console.

It will prompt you for what sort of kernel to use, including the ability to use any currently running session.

I believe you can right click in a notebook and select "New Console for Notebook" as well.


There is only one mention of Python in the entire post... there are five mentions of JavaScript. Recently, I have been mulling over the GUI creation problems and packaging problems that still plague the Python ecosystem. That being just the start of it...

I guess my love affair with Python is souring a bit. Compared to JavaScript it seems like it has slowed to a crawl on the innovation front. That is, for anything NOT related to machine learning.

I use Jupyter notebooks with Python all the time and only recently started using them with Node.js. It seems like JS is just killing it...

Haha, don't worry, the Python side is still quite active, and IPython is still evolving. JupyterLab is (so far) only a new UI. It still connects to all the kernels via the same protocol in the back, and we were not willing to change both at the same time. But thanks for caring and expressing concerns.

The fact that the GUI is written in JS is my point. The reports of Python's death... you heard it here first.

I'm not sure if this backs up your point or not. But I was blessed to build, using JupyterLab packages, a quick and easy GUI creation tool for Python packages. It lets you create the GUI itself in Markdown.


The majority of that was built with typescript.

But, the reason it is so useful is because it can run Python.

I started using Jupyter notebooks two years ago and found them fantastic for data exploration, analysis, quick coding, and prototyping.

Switched to JupyterLab recently and I can only recommend it. It's an absolute joy to work with (I especially like the full-screen mode; really great). Even for simple tasks where I used to open a file in Excel by default (when I just need to take a look or do very simple operations), I now prefer the JupyterLab experience.

Anyway, thanks for the excellent work!

Is there a way to integrate a debugger and browse variables? I have been using Spyder for casual programming, and don’t think I could do without a debugger.

In Python, the Jupyter notebook kernels are based on IPython which has some magic commands for pdb debugging hooks.

If you want more than that, someone (maybe you) will build an extension at some point.
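For reference, the pdb hooks mentioned above are standard IPython magics (a notebook fragment, not standalone code):

```
%pdb on    # automatically drop into pdb when a cell raises an exception
%debug     # post-mortem: open pdb on the most recent traceback
```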

My coworker was a Spyder user for years, but recently switched to PyCharm because they added "Scientific Mode" in PyCharm Professional

For the classic notebook, there's a "Variable Inspector" plugin that's quite neat. See https://github.com/ipython-contrib/jupyter_contrib_nbextensi...

I'm hoping these extensions will be available in JupyterLab as well

I tried JupyterLab two weeks ago and found it to run unbearably slow. I'll try it again eventually, I suppose; it very well could have been a local issue.

Please try again. We had some drastic speedups with notebooks, particularly on Firefox, in the last few weeks.

I installed it just now (v0.31.8) via pip on CentOS 7 with Python 3.6. Switching to a notebook tab takes about two seconds (the notebook has 45 Python code cells); saving a text editor with Python code in it takes about 1 second. Chrome 63.0.3239.132 on macOS (15" MacBook Pro). The "regular" Jupyter notebook is instant on both counts.

There are definitely more optimizations planned in the future. The optimizations that happened in the last few weeks took it down from tens of seconds to open and deal with large notebooks.

Maybe this will motivate the otherwise great PyCharm team to improve their product's dismal Jupyter notebook support.

I really hope they will add a panel to inspect currently defined variables, like in MATLAB; that would be so useful!

The extension framework means anybody can build something like that. The beauty of it is that because that feature is so useful it's almost guaranteed someone will make it at some point.

There is already a variable inspector for the regular notebook. Hopefully someone builds one for Lab as well.

Does anybody know if with JupyterLab I will be able to keep some code running, close the browser, and still get the output of the code afterwards? As far as I know, currently I have to keep the browser open if I want to capture the output of a cell.

If you update the `notebook` package there should be a workaround: messages will be buffered server-side until a client reconnects. It is still not perfect, but it gets part of the way. One long-term plan is to have a server-side model (not sure if it will be the default, or an extension), with the browser just being a view on it. It is quite hard to design, as many visualisation libraries assume they are in a browser context and have access to the DOM.

Buffering is indeed helpful, but it will not solve the problem. I always thought that a proxy server holding the current state and sending diffs to clients would be enough, but it would still have issues with visualization libs. Is there somewhere where this issue can be discussed and addressed? GitHub?

There are a couple of issues about that, a bit scattered around, and many moving pieces. I agree that a proxy would be a nice thing, and IIRC someone made a prototype. This also hooks into the JupyterLab "StateDB"/"modeldb" work and making it a CRDT (Conflict-free Replicated Data Type), which is WIP somewhere; but I'm not sure where the best place is to discuss this. Work may happen on jupyter/notebook, jupyterlab/jupyterlab, or maybe even https://github.com/phosphorjs/phosphor — I would take my chances on the main mailing list/Google group.

Would it be possible to buffer it in a way that survives interruption of jupyter/the kernel? (Maybe a stupid question. This lies outside of my expertise.)

Thanks for building jupyter by the way. It's an essential part of my workflow.

Probably, though that might lead to inconsistent state. There is a lot that could be done, like actually recording the messages and replaying them, or broadcasting to have, for example, a live "static view" when pair programming. But for this feature to be pushed forward you need either volunteer time or donations to the project to hire devs/managers/designers... (Donations to NumFOCUS https://www.numfocus.org/ are tax-deductible.)

What put me off Jupyter is that it can't do dynamic text. I can't link to a variable in the code, such that the text updates when I run the code again with new parameters. This would be a great feature for demos/presentations.

> What put me off Jupyter is that it can't do dynamic text. I can't link to a variable in the code, such that the text updates when I run the code again with new parameters.

You can either output text (including HTML/Markdown) from a code cell, or (at least for Python; I don't know of something similar for other languages) use the Python Markdown notebook extension to do this.
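The first option can be sketched like this; `accuracy` is a hypothetical value recomputed on each run, and the fallback branch is only there so the snippet runs outside a notebook:

```python
accuracy = 0.93   # hypothetical result, recomputed on every run
text = f"The model reached **{accuracy:.0%}** accuracy."

try:
    # Inside Jupyter this renders as rich Markdown that updates each
    # time the cell is re-executed with new parameters.
    from IPython.display import Markdown, display
    display(Markdown(text))
except ImportError:
    print(text)   # plain-text fallback outside IPython
```

Re-running the cell after changing parameters regenerates the rendered sentence, which is exactly the "dynamic text" behaviour being asked for.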

Doing something with %%javascript would seem to get you most of the way there.

I still can't get this plugin working with JupyterLab: https://github.com/genepattern/jupyter-wysiwyg

This plugin is a classic notebook plugin, not a JupyterLab plugin unfortunately.

Just tried it and found it tremendously easy to use. Well done

One thing I couldn't seem to figure out is whether it is possible to make interactive matplotlib plots (for mouseover values, zooming, etc.).

Anyone know of any maintained Docker images for this?

I'm not a fan of pip etc and prefer to isolate my notebooks in a docker container.

EDIT: never mind, you can run it using the official images by executing `start.sh jupyter lab`

Is it clear yet how one will use text editors (emacs, vim) with JupyterLab?

Are you referring to editing text files or in notebooks?

I'm interested in any jupyter workflows that allow me to use my text editor and a terminal-based language REPL/shell.

In general,

- I want to use my text editor to write any non-trivial function/class implementations.

- I want any substantial amount of code to be held and version-controlled in regular files of code, not inside JSON.

- I want to use the notebook for display (tables, figures, rendered markdown/LaTeX, etc)

So the most important question is:

- How do I conveniently work on a code file in my text editor, and then execute code in the notebook so that the most recent variable definitions in the code file are honored during the jupyter execution?


- How do I start a terminal-based REPL/shell that is sharing the same kernel as the notebook? (Relevant to text editors, because this might be for example an ipython shell running inside emacs, allowing me to easily evaluate fragments of code in the text editor.)

There are more sophisticated things one could imagine, but I don't think I want (e.g. evaluate a cell/notebook from the text editor, create a cell from text editor).

I see; well, you might get a better answer from a Jupyter dev. I use the autoreload extension to automagically pull in my latest code [1]. It usually works. For your second question, you can connect multiple frontends to the same kernel [2].

[1] https://ipython.readthedocs.io/en/stable/config/extensions/a...

[2] https://jupyter-notebook.readthedocs.io/en/stable/examples/N...
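For reference, the usual incantation for [1] is this notebook fragment (with `mymodule` standing in for your own code):

```
%load_ext autoreload
%autoreload 2      # re-import every edited module before each execution

import mymodule    # edits to mymodule.py now show up automatically
```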

Thanks, yes I'm somewhat familiar with the story under the current jupyter notebook. E.g. when I'm feeling very energetic, I sometimes manage to come up with the right series of shell invocations to get a notebook running in a browser and a terminal python shell sharing the same kernel via `jupyter console --existing`, with the right python version and virtualenv. Or even have the shell running in emacs, though there's usually something broken somewhere along the way in my setup.

I'm vaguely aware of autoreload but it seemed a bit confusing; there are various similar-sounding alternatives.

> How do I start a terminal-based REPL/shell that is sharing the same kernel as the notebook? (Relevant to text editors, because this might be for example an ipython shell running inside emacs, allowing me to easily evaluate fragments of code in the text editor.)

FYI, if you want to do both things inside of JupyterLab, you can easily start a console in JupyterLab connected to the same kernel as the notebook in three different ways: right-click in the notebook and select "New console for notebook". Or from the notebook, select the main menu File>"New Console For Notebook". Or simply start a console from the File>New menu and choose the notebook's kernel from the dropdown.

Not a single note on how to debug code. Or did I miss something?

The debugging story has not changed. The core team is in discussions with a large company to get funding to work on a debugging protocol and to collaborate on it.
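In the meantime, the plain-Python tools still work inside a cell; a minimal post-mortem sketch with the stdlib `pdb` (IPython's `%debug` magic is essentially a nicer wrapper around this):

```python
# Trap an exception, then hand its traceback to the stdlib debugger.
# IPython's %debug magic does roughly this for you after an error.
import pdb
import sys

def buggy(x):
    return 1 / (x - 2)  # blows up when x == 2

try:
    buggy(2)
except ZeroDivisionError:
    tb = sys.exc_info()[2]
    # pdb.post_mortem(tb)  # uncomment for an interactive session
    print("failed in:", tb.tb_next.tb_frame.f_code.co_name)  # failed in: buggy
```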

What specifically are you looking for?

Excited for a new Julia IDE. Seems like a nice interactive style that, surprisingly, doesn't feel like it has the kludge of the notebooks.

We don't have enough Julia developers involved! It would be great to have more feedback and patches!

I am a heavy PyCharm user, will try it as I am gradually shifting to Notebooks. This IDE should help for a smooth transition.

Give me collaborative editing please. It's a great potential feature for pair programming or instructing with Jupyter.

Unfortunately Google retired the Google Drive API, so you can't create a new application that uses RT. So while it works, there is no point in releasing it.

If I can have a console view simultaneously with everything else, I may finally switch to Python

Is JupyterLab available on a cloud service? I use Azure Notebooks a lot, it's great.

It's already available on Azure Notebooks: https://news.ycombinator.com/item?id=16422217

>> If you like to try it, pick any of your libraries, right click and select “open in JupyterLab”.

How does this affect the other environments? Will this be the demise of RStudio?

RStudio has had a server-based semi-collab SKU since before JupyterHub was even in development: https://www.rstudio.com/products/rstudio/

But is it better? Worse? Will they thrive together?

What does "SKU" mean in this context?

Meant product, but brain was failing at time of comment.

Article says they are phasing out the old notebook interface but will support both for a while. Which is fine I guess, the main panel in Lab is basically the old notebook.

The other kernels and stuff will still work, I've used them some in Lab. Lab is super extensible, I wouldn't be surprised if projects like RStudio get ported just to unify things, but it is not currently part of IPython so I don't think this is relevant to it at present.

No, R and Python are fundamentally different. I do believe we will see a shift in what they are used for though. Python more for the "deep learning" stuff, and R for more statistical, non-deep-learning work.

Jupyter supports lots of languages (including R).

RStudio is probably still nicer for working R, but I haven't done any serious analysis in R for a while now.

RStudio also supports Python and a bunch of other languages as well. It'll be interesting to see how this all shakes out.

What would be the best way to install JupyterLab on a fresh Mac?

is there now a viable way to create multi-user read-only notebooks?

so that a data scientist can prototype a dashboard in Jupyter and then multiple people can use the dashboard?

It's a bit of a mathematica for the current era :)

Every programming language needs something like this!

That's why Jupyter supports more than 60 languages (https://github.com/jupyter/jupyter/wiki/Jupyter-kernels). Which language did you have in mind?

Perhaps a dumb question, but do those languages tend to have wrappers for common Python libraries like Pandas?

I doubt it. Python is not very embedding-friendly. You might be able to bridge the gap via the foreign function interface, but it's unlikely that the Pandas API will feel as comfortable in a language it wasn't designed for.

What you can do is the opposite: run Python as the main language and integrate with others. There are demos around of notebooks with multiple languages. An old example I wrote: Python, R, Rust, C, Cython, Julia and Fortran all calling each other: https://matthiasbussonnier.com/posts/23-Cross-Language-Integ...
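As a tiny illustration of that direction (Python as the host calling compiled code), the stdlib `ctypes` FFI can load the C math library directly; library name resolution is platform-dependent, so treat this as a sketch for Unix-like systems:

```python
# Python as the host language, calling a compiled C library via the
# stdlib ctypes FFI. The "libm.so.6" fallback is Linux-specific.
import ctypes
import ctypes.util

libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")

# Declare the C signature: double cos(double)
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))  # 1.0
```

Tools like Cython, rpy2 (the `%%R` magic), and cffi build richer bridges on the same principle.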

Julia does (Pandas, matplotlib, many others). Swift is getting some language features to make this kind of binding possible/simpler. IIRC xtensor does (or can) wrap Python libraries from c++.

Oh, I thought this was only for python :)

Can't wait to put my paws on this!!!

With no support for Microsoft IE or Edge, this cannot be used in numerous enterprise environments where installing alternative browsers is not allowed.

No support does not mean that it won't work. As this is an open-source and free tool, support is announced only for the publicly available code which is regularly tested and fixed. Now if a company wants to pitch in and sell IE/Edge support, that would be great. Even better would be for Windows developers to contribute fixes back until the quality and regularity of fixes on IE/Edge make it possible to officially say they are supported.

I guess we just killed the demo site.

Those sliders look sweet :)

Will this be in anaconda?

Yes, it may already be, but an older version. It will be on conda-forge at least (https://conda-forge.org/) but it may take a few hours to appear.

Does it support Intellisense?
