What I'd really like to do is make a multiplayer naval game, with each player controlling their ship from their own notebook. Players would start out by running commands like fire(range=400, bearing=120) right from a cell, but would later be able to automate their ship - for example, pick the nearest enemy, get its range, and plug that into the fire command automatically.
My server would be projecting a big map of the world up on the wall.
However, to do this nicely, I need the ability to make a cell (or a function defined in a cell) run every X milliseconds. I know I can do this for one cell with a loop and a sleep function, but I'd really rather have multiple cells/functions "running", so we can break the code into smaller chunks and let players build their own ship UIs.
Any advice on how to do this in Jupyter with Python? In an ideal world, I'd just "tag" a cell somehow so that it ran periodically.
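One way to get that periodic behaviour without blocking the notebook is a background thread per task, so each cell can register its own. A minimal sketch (the `fire` call and the 0.5 s interval are just placeholders from the question above):

```python
import threading

def every(interval, func):
    """Call func() every `interval` seconds on a daemon thread.
    Returns an Event; call .set() on it to stop the loop."""
    stop = threading.Event()
    def loop():
        # Event.wait returns False on timeout, True once stop.set() is called
        while not stop.wait(interval):
            func()
    threading.Thread(target=loop, daemon=True).start()
    return stop

# From any cell (hypothetical game API):
# stop = every(0.5, lambda: fire(range=400, bearing=120))
# stop.set()  # cancel later
```

One caveat: output produced on a background thread lands under whichever cell last executed, so you'd want to pair this with some way of updating a fixed output area.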
Now, Jupyter is great (I'm a statistician btw) precisely because you have a presentable literate programming tool. That is extremely valuable for data analysis and model building.
Now, if you are teaching programming in itself, I think the terminal + text editor is important, since that is a step towards how things are done irl. But if you think that the terminal is too much for your audience, just go for IDLE, which has many advantages for Python development + teaching, and is already a step towards text editor + terminal or IDE-level development.
Jupyter (and similar) Notebooks are a way things are done "in real life"; they aren't just for entertainment purposes; and JupyterLab blends that into an IDE and the IDE with notebooks approach is also a way things have been done "in real life"; both with third-party tools incorporating Jupyter (and earlier IPython) notebooks, and with other IDE's incorporating notebook-style interaction.
Which isn't to say that there aren't arguments for teaching programming outside of the notebook environment rather than within it (though the notebook is kind of a super REPL, and REPLs seem to me to be great tools for teaching). But the claim that not using notebooks is better because it is how things are done "in real life" is just taking a highly selective view of real life.
You can just "pip install jupyterlab" on a Raspberry Pi and it works great!
Although I'm new to Jupyter Lab, I just tried it and it does feel like it could be a very approachable IDE for educational use.
Can't find it on CODE.org any more, it's mentioned here http://www.gameinformer.com/b/news/archive/2017/11/10/little....
To do it more properly, however, you can have a post_save_hook for the whole notebook, which runs the code from whichever cells you want, however often you want, outside of the notebook. Of course, this way you won't see the output in the notebook.
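For reference, a post-save hook is configured in jupyter_notebook_config.py. A common recipe (a sketch; adapt the command to taste) exports each saved notebook to a plain script that an external scheduler could then run:

```
# ~/.jupyter/jupyter_notebook_config.py
import os
from subprocess import check_call

def post_save(model, os_path, contents_manager):
    """After every save, convert the notebook to a plain .py script."""
    if model['type'] != 'notebook':
        return  # only act on notebooks, not on plain files
    d, fname = os.path.split(os_path)
    check_call(['jupyter', 'nbconvert', '--to', 'script', fname], cwd=d)

c.FileContentsManager.post_save_hook = post_save  # `c` exists in config files
```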
for ship in myShip.radarScan():
    display([ship.type, ship.range, ship.bearing])
display handles that you can update from elsewhere in the notebook might be of interest. Here's an example:
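A minimal sketch of the pattern, using IPython's documented DisplayHandle API: display() with display_id=True returns a handle whose output any later cell (or callback) can overwrite in place.

```python
from IPython.display import display

# Cell 1: create some output and keep a handle to it
handle = display("scanning...", display_id=True)

# Cell 2 (or a timer callback): rewrite that same output in place
if handle is not None:  # the handle is None outside an IPython/Jupyter session
    handle.update("3 contacts in range")
```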
I don't think you mean this, though it could be intriguing just the same...
schedule = set()  # (time, function) pairs; sleep/now stand for time.sleep/time.time
while schedule:
    when, func = min(schedule)   # soonest task first
    sleep(max(0, when - now()))
    schedule.remove((when, func)); schedule.add((now() + interval, func))
    func()
Don't. Use something that will engage them visually and mentally. Kids are not impressed with text, that ship has sailed.
The things I'm most impressed with (relative to Jupyter Notebooks, which were already amazing):
- The ability to render .geojson, .json, markdown, Vega and Vega-Lite files, and integrate external tools like Voyager.
- The new terminal is a joy to use compared to what came before it
- The ability to lay out multiple windows easily, much like an IDE
- The plugin ecosystem means that we can start writing custom components for the analytical platform we're building.
Thanks so much to the team!
An example of using JupyterLab services is provided on their GitHub:
Because of the way they built their packages I have been able to stand on the shoulder of giants and build the following tool:
I am very grateful to the JupyterLab team. They have built something brilliant.
Also worth checking out is the renderers, the vega and geojson ones are really cool.
As the readme states, the realtime API it relies on has been deprecated.
This looks very impressive.
The one time I made a handsome demo in Mathematica, I realized that there was no straightforward way to share it, so I gave up and redid it in R.
I'm grateful I never spent time learning those programs. I still miss Mathematica now and then, but free software is the only way to go for me nowadays.
As a note, in Python, there is pyconcrete for code encryption.
This supported the upward trend of interest in open source and especially ipython around the labs.
MATLAB has a TON of things that you don't get with Python + Numpy + Scipy. Complicated plots/graphics with interactivity are a pain point in Python compared to MATLAB. Similarly, the debugging capabilities in MATLAB are truly magical compared to a pretty terrible experience on the Python side of things. Even though we deploy software in Python, we are much faster prototyping in MATLAB and deploying finalized algorithms/software to Python than trying to do everything in Python from the get go. MATLAB's JIT is also pretty great and while Numba is pretty great, it still requires more work and can be brittle at times.
And then there's LabView, used widely for lab and process automation, which really has no open-source or free alternative.
I wonder, what do you mean by this? You mean it's possible to write code in the Out cells in Mathematica? (is that possible?)
Or do you mean writing math in the notebook?
I probably expressed myself very poorly.
I think the big advantage of Julia against Mathematica is the modern software stack and the bleeding edge technology they can embed very quickly, such as d3js, WebGL, etc.. The web world is moving quickly and the scientific python community can keep up while Mathematica moves only very slowly -- note they started their web interface (cloud version) only a few years ago.
But I prefer IPython as well. Mathematica is awesome, but Wolfram is pretty heavy handed with the proprietary stuff.
Edit: one thing that I'd really like to have is some of Mathematica's tools for managing scope, like Module and similar. You can kinda do this, but it's clunky. And on top of that, some way to limit declarations to a section (group of cells). It's kinda awkward to have multiple separate sections in an IPython notebook because your declarations start to overlap. Unless someone here knows how already?
One of the images from 1987 says "The Mathematica front end begins to take shape…" and "(Theo Gray invents cells and groups … and other things still seen today…)".
pip install --upgrade nbstripout
I can see the benefits of stuffing everything into a single file, but separating would be so much better IMHO. Version control is too important to mess with. Sometimes I want input and output to be version controlled, sometimes I only want the input. By splitting I can easily do that with simple .gitignore rules.
The other possibility is to export a notebook as an actual file-and-folder tree: https://github.com/takluyver/nbexplode so that rich objects (png, svg, etc.) are independently editable.
It can be challenging to make this work well across different filesystems, though.
You can go even further and tell the server to store nothing on disk, keeping everything in a database (postgres, for example): https://github.com/quantopian/pgcontents
Version control is too important to be left to the content-blind tools we typically use for it. In a perfect world, there'd be a core version control engine with content-specific plug-ins.
It strips all notebook output from *.ipynb-files before commit.
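Concretely, it works as a git clean filter; running `nbstripout --install` inside the repo sets up roughly this:

```
# .gitattributes
*.ipynb filter=nbstripout

# .git/config (written by `nbstripout --install`)
[filter "nbstripout"]
    clean = nbstripout
```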
For reference, I use Matlab and Mathematica pretty heavily, and python in a text editor like sublime along with a terminal running ipython shell.
I then take that and make it a more formal script/process w/ version control and all that fun stuff. They're also really great for learning. I just wouldn't put them in production :-)
Still, I love having my data collection scripts documented right there with the subsequent analysis. So, I've disciplined myself to handle experimental data in one of two ways:
* For "small" data, format it as a Python thing (list, dict, whatever is appropriate), and paste it into the next cell as an input. I haven't found a way to do this automatically, and I'm careful not to make things too automatic lest I run a cell and over-write old data.
* For "big" data, dump it to a file. I just turn the system time into a filename, to avoid over-writing an old file.
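The timestamp-to-filename trick is a strftime one-liner (the prefix and extension here are arbitrary):

```python
import time

# unique per second; enough to avoid clobbering an earlier dump
filename = time.strftime("run_%Y%m%d_%H%M%S") + ".csv"
```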
I don't think I've come up with the last word on using Jupyter as a data-collecting lab notebook, nor am I yet 100% certain that it's even a good idea. This is a work in progress, but much better than anything else I've ever tried. For complicated experiments, I still create stand-alone Python programs to control things.
A recent workflow I've had for a data analysis project is to have each stage of data processing in a separate function, with all the functions called in order from an "if __name__ == '__main__'" block, with all but the function I'm presently working on commented out. Each function returns nothing, but saves its data to an HDF5 file. Other functions read the inputs they need from the HDF5 file and write their outputs to the same file, and if I want a fresh run I just delete the file, uncomment everything in the '__main__' block and run again.
The functions also save output plots to subfolders.
This is compatible with version control, and caching on disk rather than just in memory.
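The shape of that pipeline, sketched with the standard library's shelve standing in for HDF5 (the stage names and data here are made up):

```python
import shelve

CACHE = "results"  # the shared on-disk cache every stage reads and writes

def load_raw():
    with shelve.open(CACHE) as f:
        f["raw"] = [1.0, 2.0, 4.0]  # stand-in for the real loading stage

def normalise():
    with shelve.open(CACHE) as f:
        m = max(f["raw"])           # read the previous stage's output
        f["normalised"] = [x / m for x in f["raw"]]

# in the real script these calls sit under `if __name__ == '__main__':`,
# with the already-cached stages commented out
load_raw()
normalise()
```

Deleting the cache file and uncommenting all stages gives the fresh run described above.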
The biggest downside compared to Jupyter notebooks is the lack of interactivity in the saved plots (I can make interactive plots pop up of course, but they all appear in separate windows at once, so it's less clear which part of the code each plot came from), and the lack of LaTeX in code comments - I will still have external LaTeX documents explaining what algorithm I'm using somewhere.
So for now, the downsides of notebooks with respect to version control, data caching and extra state that I have to remember in order to not hit subtle bugs in my code as I hack on it, seem to outweigh the upsides.
Maybe what I would like is an editor that renders LaTeX in comments, and which embeds arbitrary plot windows at given points in the code, but without any data persistence, and without the embedded plots actually being saved anywhere - your file is still a normal Python file and it's just the editor rendering things that way based on magic comments or something.
Or maybe I should just write a decorator that renders a function's docstring as LaTeX and embeds any matplotlib windows produced into one scrolling document with the sections named after the decorated functions. Decorator could take an argument telling it whether to include the full source of the function, the comments of which it could also render as LaTeX. Then you have input code compatible with your favourite text editor and version control, and an output document which optionally includes the code.
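A toy version of that decorator idea, collecting docstrings into one report string (everything here is hypothetical, not an existing library):

```python
import io

REPORT = io.StringIO()  # the assembled output document

def section(func):
    """Append the function's docstring (LaTeX/markdown source) to the
    report under a heading named after the function, then return it."""
    REPORT.write(f"\n## {func.__name__}\n{func.__doc__ or ''}\n")
    return func

@section
def smooth(xs):
    r"""Moving average: $y_i = (x_{i-1} + x_i + x_{i+1}) / 3$."""
    return [sum(xs[i - 1:i + 2]) / 3 for i in range(1, len(xs) - 1)]
```

Rendering REPORT (plus any captured matplotlib figures) to HTML or a notebook would be the remaining step.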
https://github.com/DarwinAwardWinner/CD4-csaw (look at scripts/*.Rmd)
I am excited for Jupyter Lab and it's a step in the right direction. But it feels a little bit like they're reinventing the wheel with some of this stuff. I would gladly pay money for a python copy of the R ecosystem with RStudio, R markdown, R notebooks, where everything just works great by default.
R Notebooks followed the org-mode model of keeping a simple, revisionable document with code interspersed.
Edit: oh.. I misread. It doesn't support using external editors. All I want is some way to edit those text boxes with another program. I can't do any serious work in a web browser. It's awful.
I have used it quite productively for a while, but at the moment have mostly moved back to the browser for my notebooks. I can recommend collecting larger functions in a separate source file (for Emacs editing bliss) which you import into the notebook. [import helpers; importlib.reload(helpers)]
I haven't had as much luck using it for other languages, but I also haven't put in much effort into trying.
Not suggesting that it does everything right now, but rather that, with a bit of coding, implementing something similar for cells does seem feasible.
If you like to try it, pick any of your libraries, right click and select “open in JupyterLab”.
So please, dear authors: when I click on your articles, I'd like a single sentence somewhere on the landing page that tells me what we're talking about, without making me read entire paragraphs.
Now, readers of HN might not know, and HN's decision (which is, on balance, I think beneficial) to not allow additional supporting commentary besides the title on posts with links to outside articles prevents contextualizing this well for HN readers. (Perhaps allowing one or a small number of supporting links with very brief annotations might be an improvement, but we really do want to avoid Slashdot-style editorializing of submissions, which the current setup does quite efficiently.)
> JupyterLab is an interactive development environment for working with notebooks, code, and data.
On the other hand, if you use Python, you should definitely check out Jupyter notebooks (formerly IPython notebooks, and now JupyterLab, I guess). They're useful when prototyping data pipelines, since the state of the interpreter is saved, letting you iterate on ideas and see the outputs quickly.
At some point I need to drop down to the terminal to run something. I run commands in the terminal, collect some results or some info, and go back to my notebook to resume my work inside it.
Later I need to look up something in a text file. I open a certain text file. Browse to a certain line number, read that line, maybe edit the text file, and close the text file to go back to my notebook.
Does JupyterLab keep a record of the point in my progress in the notebook when I switched to the terminal or the text file, what I ran in the terminal, and what info was used? If I edited the text file, what was before and after of the text file? In other words, does JupyterLab help with the chronology of workflow events?
If not, I don't see how this is anything other than hundreds of "IDE"s out there.
Notebook format has its own issues, but going back to IDE is not a solution. Offering both notebook and an IDE at the same time and leaving it up to the user to make the best of the combo is not a solution either, unless the offering helps some kind of a way of eliminating the cons of either format.
However, you can run the shell commands straight from a notebook cell (use the %%bash cell magic or prepend the line with !).
Not sure what to do about editing data files. If you can do this with something like awk, just use a shell magic cell, but if it needs to be done manually I guess you’re stuck manually documenting this in markdown?
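For the awk case, the whole edit can live in a %%bash cell so it stays documented alongside the analysis; shown here as plain shell with a made-up data.csv:

```shell
# in a notebook cell this would start with the %%bash magic
printf 'alpha,1\nbeta,2\n' > data.csv          # made-up input
awk -F, '{ print $1 "\t" $2 * 10 }' data.csv > data.tsv
cat data.tsv
```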
Point being, offering an IDE in 2018 is not interesting unless you've added something "smart" that makes the life of the engineer/scientist easier compared to the rest. Otherwise, IDEs have been in development for three decades or more.
As for graphics updating on file-change event, that can probably be supported through an extension. This is similar to how certain LaTeX editors automatically re-render the doc on a change-event.
Do I read this correctly as hinting that Jupyter Notebook is being replaced by an IDE?
The "Jupyter Notebook" web application (i.e., the browser application that was originally released in 2011) will eventually be replaced with JupyterLab.
So it extends Jupyter Notebooks by giving you new IDE-like features, whilst retaining the ability to write notebooks.
Guess so :-)
This especially includes things like "Table of Contents", "Variable Inspector", "Ruler", and "Execute Time". How easy will it be to have all of this functionality in the JupyterLab notebooks? There are certainly advantages to having data/terminals/notebooks in an IDE-style layout, but for the moment it would still be two steps back, one step forward for me personally. This is not to disparage the effort - JupyterLab clearly is the future!
It will take some time to port all the existing extensions, but the good news is that JupyterLab was designed around extensions (actually, everything in JupyterLab is an extension, with no privileged component), so it will be easier to write these for JupyterLab than for the current notebook.
The documentation on writing extensions is also much better than for the classic notebook, and we've had new contributors writing extensions in 2 to 3 hours.
So we encourage you to try it and send us feedback!
This was problematic, especially for me, as I open documentation on the other side of the window and keep resizing the window out of habit. But overall, JupyterLab was great. You can work on the same notebook side by side too, and it has a file manager/viewer panel.
See https://github.com/jupyterlab/jupyterlab/pull/3805 and https://github.com/jupyterlab/jupyterlab/pull/3802 for more details.
It would be great to have community plugins that let RStudio open Jupyter files, and JupyterLab open RStudio files!
how does it fit into a developer workflow, or do i need a different mindset?
what should I try to do with this beta to get my mind right is probably the best question
To be honest, if you're not doing exploration or quick prototyping work (you don't have to be a data scientist though), Jupyter might not be that useful to you.
Jupyter is really useful when you have intermediate results that you don't want to keep regenerating. It lets you test different ideas at any given point in the program without re-running everything above it -- kind of like a pause button. (garden of forking paths) And if you do have to change any code, you can change things in-situ without re-running the entire program. It's like programming with a tape-recorder with mutable state.... hmm, ok maybe that isn't a good analogy, but close enough.
For quick scripts, I reach for vim and run my code on the console, and insert "import ipdb;ipdb.set_trace()" wherever I need breakpoints.
For more complex work where there are different permutations, and many throwaway branches of ideas that I have to test, Jupyter (or any notebook type tool) is way more useful.
Imagine a PowerPoint presentation, but with code samples that can execute inside the slide without ever leaving the presentation.
I thought it was pretty nifty. I don't know how many programming languages it supports, but it seems like a good training tool. When I was at university (back in 2003), the lecturer would constantly switch from their PowerPoint slides to Emacs in order to demonstrate code; this seemed like a more streamlined version of that.
Note that almost all the rendering is - for JupyterLab - client side, and that for scaling, we know of single JupyterHub deployments with close to 5k users. Horizontal scaling of Hubs is improving, and we hope to have a more robust solution soon.
It will prompt you for what sort of kernel to use, including the ability to use any currently running session.
I use Jupyter notebooks with Python all the time and only recently started using them with Node.js. It seems like JS is just killing it...
The majority of that was built with TypeScript.
But, the reason it is so useful is because it can run Python.
Switched to JupyterLab recently and I can only recommend it. It's an absolute joy to work with (I especially like the full screen mode - really great). Even for simple tasks where I used to open a file in Excel by default (when I just need to take a look or do very simple operations), I now prefer the JupyterLab experience.
Anyway, thanks for the excellent work!
If you want more than that, someone (maybe you) will build an extension at some point.
I'm hoping these extensions will be available in JupyterLab as well
Thanks for building jupyter by the way. It's an essential part of my workflow.
You can either output text (including HTML/Markdown) from a code cell, or (at least for python, don't know of similar for other languages) use the Python Markdown notebook extension to do this.
One thing I couldn't seem to figure out is whether it is possible to make interactive matplotlib plots (for getting mouseover values, zooming, etc.).
I'm not a fan of pip etc and prefer to isolate my notebooks in a docker container.
EDIT: never mind, you can run it using the official images by executing `start.sh jupyter lab`
- I want to use my text editor to write any non-trivial function/class implementations.
- I want any substantial amount of code to be held and version-controlled in regular files of code, not inside JSON.
- I want to use the notebook for display (tables, figures, rendered markdown/LaTeX, etc)
So the most important question is:
- How do I conveniently work on a code file in my text editor, and then execute code in the notebook so that the most recent variable definitions in the code file are honored during the jupyter execution?
- How do I start a terminal-based REPL/shell that is sharing the same kernel as the notebook? (Relevant to text editors, because this might be for example an ipython shell running inside emacs, allowing me to easily evaluate fragments of code in the text editor.)
There are more sophisticated things one could imagine, but I don't think I want them (e.g. evaluating a cell/notebook from the text editor, or creating a cell from the text editor).
I'm vaguely aware of autoreload but it seemed a bit confusing; there are various similar-sounding alternatives.
FYI, if you want to do both things inside of JupyterLab, you can easily start a console in JupyterLab connected to the same kernel as the notebook in three different ways: right-click in the notebook and select "New console for notebook". Or from the notebook, select the main menu File>"New Console For Notebook". Or simply start a console from the File>New menu and choose the notebook's kernel from the dropdown.
>> If you like to try it, pick any of your libraries, right click and select “open in JupyterLab”.
The other kernels and stuff will still work, I've used them some in Lab. Lab is super extensible, I wouldn't be surprised if projects like RStudio get ported just to unify things, but it is not currently part of IPython so I don't think this is relevant to it at present.
RStudio is probably still nicer for working in R, but I haven't done any serious analysis in R for a while now.
so that a data scientist can prototype a dashboard in Jupyter and then multiple people can use the dashboard?