Pyodide: Bringing the scientific Python stack to the browser (hacks.mozilla.org)
390 points by barryvan on April 16, 2019 | 112 comments



Wow, it seems to work really well. After giving it a small test drive, I'm impressed. Solid work from Mozilla, and a pretty informative article too!

That said, between this, Iodide, and ObservableHQ, I guess I give up. The browser is the new OS.

My question now is - how can we make the browser environment more like Emacs (bear with me)? My main complaints are:

- Browser ergonomics absolutely suck, and there's nothing you can do about it. You're at the mercy of each service's UI, which usually means you'll be clicking a lot and maybe sometimes get a keyboard shortcut here and there. Forget about advanced features, or consistency between sites.

- You have near-zero control over your computing environment. Between sandboxing and half the code being an obfuscated blob of transpiled JS (the other half of the code is on the server), you can maybe automate the UI a little with userscripts and fix it up with userstyles.

- There's near-zero interoperability. Unless service authors agree on and implement some APIs, things can't talk to each other. Forget about making them talk. Whatever little wiring you can sometimes do (thanks to services like Zapier and IFTTT), you never have control over the process, and it always involves communication through third-party servers, even if all you'd like is to transfer some local data between browser tabs.

If the browser is the next OS, can we make it suck less than desktop OSes in terms of ergonomics and productivity? Desktop OSes already suck in this regard compared to the promise of old Smalltalk and Lisp systems (that's why I live in Emacs most of the time), so all I see is a downward trend.

The current state of things is fine for casual use and casual users, but if I'm to spend 8+ hours a day doing serious work in a web browser, the browser needs to be better.


Hypothesis: it would be better to live in a world where UI is mostly user-authored, and B2C is mostly an API marketplace.

In other words, imagine that instead of being forced to digest a plurality of static and unforgiving view hierarchies from designers on high, you have a single, long-lived, personalized UI that is flexible, changes with your needs, and allows you to integrate new data and services alongside or into existing workflows. Think of it as a personal UI agent.

I like this model of software primarily because of how much creative power and agency it gives end users. "B2C" could end in "creator" instead of "consumer". It's unix-y in its composable nature, and more in line with the hopes we had for the web's ability to unlock creativity in its infancy.

It also seems like a step towards a world where we see fewer ads, because there's more of a culture of paying for digital products by usage or subscription (though ad-supported options would remain; I don't see this as a revolution against ad-supported models).

Obviously, lots of challenges to execution, but would love to hear of anyone else thinking along these lines!


> Hypothesis: it would be better to live in a world where UI is mostly user-authored, and B2C is mostly an API marketplace.

I tentatively believe in this hypothesis. That's part of the reason I moved most of my computing into Emacs - I get greater control over the UI there, and reap the benefits of deep interoperability.

I think this would be a working system. You'd have a separate market for services, and a separate market for software consuming these services. I doubt most regular people would become full-blown creators, but they would happily shop for more ergonomic tools and customize them to the limit of their needs. It's kind of similar to the "right to repair", where nobody honestly expects that everyone will be fixing their appliances themselves - the point is that those who can, would, and they would also offer their services to those who can't or don't want to.

I'm not sure how to get to this world. That's kind of the question in my original post - I see fixing the browser for productive use to be a potential stepping stone involving the same or similar changes.

I have one idea I'm hesitant about: it would be nice if we could force all services on the Internet to communicate with open and documented protocols and APIs, while simultaneously banning any kind of "you can only use our official app" clauses in ToS. That is, force the decoupling between a service and client software that consumes it.


I like the idea, but I think most users never would/could author their own UIs, and many who could would only want to a small percentage of the time... but maybe your idea could work if there were essentially an API marketplace AND a UI marketplace, and services competed on both of them separately. v1 from a service would (probably) always offer both, to be usable, but if it published some contract for how they talk to each other, they'd be decoupled, and anyone could author a new UI (and, I suppose, if it were a marketplace, sell it; though given the constraints, maybe only selling the API is realistic, and UIs are more FOSS/enthusiast-driven/fork-based?) and make it discoverable as an alternative to try out.


I think having a distinct separation between back-ends and swappable front-ends would be a good thing, but by this point, it would require government intervention to make it happen. Twitter, Facebook, etc. froze out third-party clients to protect their advertising revenue. They're not going to let third-party clients back in unless they're forced to, and such a move by the government would be tantamount to declaring war on the advertising industry, so, it's not likely to happen.


I agree with your points here, and I think two markets - a service API market and a client software market - would be a great thing to have. One of the things that makes a lot of services suck (social media platforms in particular) is the control they have over the APIs they provide. To pick one example, I can sort of use the Facebook API for some things, after jumping through enough contractual hoops, but I can't really go and build a Facebook client with feature parity to the official one. Not without risking my account getting banned. And I certainly couldn't distribute such a client.


To do this, we need to build technologies that can work with information and meaning in code, in addition to data. Users think in terms of information and meaning much more than in terms of data. These technologies would allow us to build code in the language that users use, giving us the right tools to build meaningful user interfaces easily. In other words, we need the right tools for the job. I am building one of the many tools needed for this. It is called the endeme. It is a library for C# that can be found here: https://github.com/jonrgroverlib/InfoLib. Discussion can be found here: https://thisstack.wordpress.com.


I think it's impossible. We lost. The incentives aren't there for interoperability and support for client-side customization to meaningfully improve - big names will resist, either because it makes it easier for users to sidestep ads and tracking, or because the kinds of features it would require aren't within reach of the minimum viable user (https://www.reddit.com/r/dredmorbius/comments/69wk8y/the_tyr...).

Legally-enforced open APIs and client-agnosticism like you mention downthread sound wonderful (maybe we could start by deprecating the User-Agent header), but for similar reasons, I can't see it ever taking off.

Pessimism notwithstanding, here's an idea: imagine a browser extension where you could implement privileged pages/tabs that had access to other tabs' script environments/DOM, and could inject elements from other pages into itself. So you could write a workflow like:

1. open a Wikipedia page in a background tab (`const wiki = new Tab(etc...);`)

2. run some JS to pick out values from a <table> in it

3. open a Google Docs spreadsheet in a background tab

4. fill out the spreadsheet using the previously obtained data

5. as a bonus, inject the table + spreadsheet elements into the extension page for side-by-side reference
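
A rough sketch of what those steps might look like, assuming the hypothetical privileged `Tab` API from step 1 (none of this exists in today's WebExtension APIs; `Tab`, `eval`, `insertInto`, and `fillSpreadsheet` are all made up for illustration):

    // All of `Tab`, `tab.eval`, and `tab.insertInto` are assumptions.
    async function wikiToSheet() {
      // 1. Open a Wikipedia page in a background tab.
      const wiki = new Tab('https://en.wikipedia.org/wiki/Demographics_of_Europe', { background: true });

      // 2. Run some JS inside it to pick values out of a <table>.
      const rows = await wiki.eval(() =>
        [...document.querySelectorAll('table.wikitable tr')]
          .map(tr => [...tr.cells].map(cell => cell.textContent.trim())));

      // 3. Open a spreadsheet in another background tab.
      const sheet = new Tab('https://docs.google.com/spreadsheets/...', { background: true });

      // 4. Fill out the spreadsheet with the previously obtained data.
      await sheet.eval(data => fillSpreadsheet(data), rows); // fillSpreadsheet: also hypothetical

      // 5. Bonus: pull both elements into this privileged page, side by side.
      await wiki.insertInto('#left-pane', 'table.wikitable');
      await sheet.insertInto('#right-pane', '.grid-container');
    }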

Not sure what the emacs analogy would be - maybe something like org-mode and Babel being able to generate things using the contents of other buffers.


> The incentives aren't there for interoperability and support for client-side customization to meaningfully improve - big names will resist, either because it makes it easier for users to sidestep ads and tracking, or because the kinds of features it would require aren't within reach of the minimum viable user

Yup. Also, because of security - that, I imagine, will be the argument legitimizing the whole thing. I'm acutely aware of how interoperability conflicts with security. Every compromise you make to make the software more useful is also a potential way for users to self-pwn when subjected to social engineering attacks. My Emacs is a monster of productivity, but then again, it's so niche that nobody is writing elisp malware.

> Not sure what the emacs analogy would be - maybe something like org-mode and Babel being able to generate things using the contents of other buffers.

Your example is actually something one would do in Emacs - (1) open a buffer with some data in the background, (2) copy the interesting parts, (3) paste it into a new buffer in some specific major mode that helps you (4) restructure the data, and (5) show the work; all in a single step you could bind to a key or button.

A different example of Emacs-style integration I'd like to have: run all my IMs in the browser (FB Messenger, Slack, Skype, Telegram) as background tabs, and have a resident piece of code that reacts to new messages in each, aggregates them, and displays a foreground tab with an "event stream" ordered by time (and possibly a "reply" button). Kind of like an IRC channel unifying all your browser IMs, with the ability to add new ones without having to deal with OAuth and API calls to IM servers.

Elsewhere in the thread[0], 'zapzupnz says that "data can be available online without necessarily being web first. Heck, most things are — data provided to mobile apps and desktop clients via services that might have a web frontend.". Well, so how come I can't easily make automated queries to my bank to fetch my balance, put it in whatever accounting app I want to use, and display it on my personal dashboard for extra fun? The only way I can do this today is by scripting around the bank's authentication and scraping their page - which risks termination of my account for ToS violations. This is less about the browsers and more about API control and problematic ToS - but it would be much easier to do if I could easily shuttle arbitrary data between browser tabs myself.

BTW, this thread made me realize that I'm not very clear on what a browser extension can or can't do; maybe the situation is slightly better than I think. Need to check it out.

--

[0] - https://news.ycombinator.com/item?id=19680964


Puppeteer?


These are some good points.

One small way in which Iodide advances this is that by having the editor be a single text-editing widget (rather than multiple cells, as you see in Jupyter and others), it should be easier to replace that widget with an alternative editor, or (using an extension) link to a native editor on the machine.

But all of these other issues, I agree, are things that would be nice to improve upon. I think in part this comes from the fact that so much of the "productivity on the web" stack is dominated by big players (Google Docs, Microsoft Office 365) that there hasn't been a big push for interoperability and customizability. I'd love to see a movement around that (but it's definitely out of scope for what the Pyodide team can currently take on!)


Sure, I'm not saying Pyodide team should take that on! You've already done an awesome job, and the whining in my comments isn't aimed at you - it's aimed at the direction computing is heading, of which your project is but a symptom. Right now there isn't much you could do; the browser platform isn't geared towards this kind of user-driven interop.


> If I'm to spend 8+ hours a day doing serious work in a web browser, the browser needs to be better.

Agreed! Jupyter notebooks are great, but I miss my Vim keybindings when editing code in cells.

(Plugins like Vimperator always collide with some native keybindings.)


If you run EIN (the Emacs IPython Notebook module) in Emacs with Vim keybindings (via Spacemacs or evil-mode, or some other distribution/mode), you can get some pretty great results. Definitely look into it if you're either already using Emacs or willing to try it out.


I tried it at some point, but a minor inconvenience was that it kept adding superfluous metadata to the cells I was editing.

Since some of the notebooks I work on are collaborative and versioned with git, this was painful for reading diffs (even more so with the ipynb format), so I dropped it, to my regret.


It's far from perfect, but I use jupyterlab with the jupyterlab-vim extension for this: https://github.com/jwkvam/jupyterlab-vim


Not just keybindings. I don't know how your Vim setup looks, but my Emacs setup gives me some extra "non-default" tools that are consistent across the whole program, whether I'm editing Lisp, JS or managing files in a directory. Two that I use daily are semantic select (you press a key to expand your selection to the nearest encompassing semantic unit) and multiple cursors.

I did spend several days in a row typing code 10+ hours a day in an Observable notebook (I had an idea I needed to validate and demonstrate to other people in the company). While the tool itself was amazing, the coding aspect was not.


qutebrowser has a keybinding for opening things in Vim, IIRC.


Fully valid points. And I see no reason why we should accept the urge to use the browser for many tasks which are way better accomplished with existing solutions. After all, aren't we increasing the complexity while limiting functionality and ease of use?

Sure, if your core business is in the Cloud and browser market, this may look different, but I believe the terminal will still continue to serve the scientific computing community well…


I'm getting tired of fighting an uphill battle here. Web technologies have overwhelming dev mindshare; all the cool things happen either as web pages or Electron apps now (and the latter are even worse than web pages, because raw browsers can at least be fixed up a bit with userscripts and plugins).

> terminal will still continue to serve the scientific computing community well

I hope so, but note how data is on the web, the output is expected to be on the web (so others can consume it), collaboration is expected to be on the web, and now you can code things on the web... Everything is moving into the browser.


> the output is expected to be on the web

That's no more true now than it has been for the past 10 to 15 years. However, output is just one view; something that automatically generates HTML to display things, like scientific data, is great, but that's far from the only, or even primary, way that many things are consumed from the internet.

Other views exist that can visualise the same data in other ways. That is, data can be available online without necessarily being web first. Heck, most things are — data provided to mobile apps and desktop clients via services that might have a web frontend.

My personal thinking is that people really are too hooked on this narrative about the browser as an OS, and maybe have gotten tunnel vision from it.

The internet, as it is today, is still mainly services. Many things may be more and more accessible in the web by preference, yet remain freely available and consumable through other forms (REST APIs, direct database access, etc) and delivered through apps and integrations within operating systems. Some examples might include:

- Dropbox: files sync with the various apps and utilities.

- iCloud: mail, contacts, and calendar are mostly consumed by the native apps. Pages, Keynote, and Numbers documents are mostly edited in the Mac apps.

- Office 365: the Windows apps remain the dominant way to access and edit Word, Excel, and PowerPoint documents.

- Github: the most popular source code repository today, but still very much consumed at the command line. The website is strongest at wiki editing, forums, and so on, but the bulk of code moving back and forth is done with the `git` command or IDE integration.

Let us not underestimate how much of the internet is not primarily consumed in web browsers, and likely never will be in certain large markets (for instance, developers), where internet-located data is consumed as services that interact with apps much more than as websites.


> That's no more true now than it has been for the past 10 to 15 years.

It is, though. 10 years ago I'd expect to get a PDF or an .XLS or a bundle of Matlab code. Maybe a static page. Today, if you can't interactively explore the data in the browser, it's considered subpar.

> The internet, as it is today, is still mainly services. Many things may be more and more accessible in the web by preference, yet remain freely available and consumable through other forms (REST APIs, direct database access, etc) and delivered through apps and integrations within operating systems

Disagree. The Internet may be mostly services, but they're services with default UIs you're forced to use, and are not freely consumable through other forms. REST APIs are restricted both in terms of features and what ToS allows you, and more often than not you couldn't build an alternative UI with feature parity to the original one.

> Dropbox: files sync with the various apps and utilities

Sorta, kinda. Can I have an alternative implementation of the Dropbox client? The problem isn't bad here, though - Dropbox does one thing and does it well, i.e. syncing files with real OSes and their filesystems. You can work with that to the extent your OS lets you.

> Pages, Keynote, Numbers ... Office 365

All being increasingly replaced by Google Docs, because it's free and you already have an account. You can't edit those outside the browser.

> Github

Exception, not the rule, and it's an artifact of the fact that developers still mostly work on desktop OSes with real filesystems. I'm worried about the future in which we'll all be using some future evolution of VS Code in the browser, communicating with future Github in the background over some APIs you can't hook into.

I'm not saying everything is in the browser now. But it sure as hell looks like in 10 years it'll all be.


> Today, if you can't interactively explore the data in the browser, it's considered subpar.

PDFs still remain a primary form of interchange for academic documentation. People who really want to crunch the data will want raw data files or some sort of common interchange format.

> but they're services with default UIs you're forced to use

I gave a bunch of examples where you're not forced to use them. Most of the popular web services are also available through apps on mobile.

> REST APIs are restricted both in terms of features and what ToS allows you

For public REST APIs, sure. For private use, it's still things like REST and whatnot that power desktop and mobile apps, bringing data from those internet-based services to local clients, completely bypassing the web.

> Can I have an alternative implementation of the Dropbox client?

You can, but that isn't the point. The point is that the Dropbox client, the executable program or mobile app, is how people tend to use Dropbox. If not that, people also connect to Dropbox through alternative third party apps.

It's completely possible, and I'd wager mostly the case, to bypass the web.

> All being increasingly replaced by Google Docs

Citation needed.

> You can't edit those outside the browser

You can, using Google's apps. They're not fantastic, but that's not the app's fault, that's Google's. Meanwhile, iWork and Office 365 let one easily edit documents with the desktop and mobile apps — and Office remains king of the hill in enterprise, whatever people may wish to believe.

> Exception, not the rule

But still highly relevant.

> I'm not saying everything is in the browser now. But it sure as hell looks like in 10 years it'll all be

It really doesn't. There's no pattern on which to base this. Apps are far more popular, especially thanks to the rise of smartphones; the web remains a convoluted mess of inconsistent looks and feels that increasingly assumes a high-quality broadband connection, even though vast portions of the developed world still don't have adequate internet to completely shift away from local apps; and plenty of enterprises still run on older versions of software and standards.

When one examines the whole picture, the web is far from being as ubiquitous as you imagine it to be or shall be — it's merely the next biggest alternative that happens to be huge.


> all the cool things happen either as web pages or Electron apps now

There are also plenty of cool things on mobile OSes.


Most of which are webpages in a webview these days; mobile software is also much more skewed towards casual use due to device form factor. Here I'm focusing on more professional or even prosumer use cases.


Not really; there's plenty of native code as well.

Also both iOS and Android have very interesting architecture features, still not widespread on desktop OSes.


> Also both iOS and Android have very interesting architecture features, still not widespread on desktop OSes.

Most of those are heavily favouring security at the expense of interoperability and user control. Not exactly the direction I'd like things to see heading on the desktop. Compared to laptops and PCs, mobile devices are essentially interactive TVs.


From an architecture point of view, there are plenty of interesting things to explore beyond security.

But yeah, I want to see sandboxes everywhere, and I also enjoy the direction that OS X and Windows are going in that regard.


> the other half of the code is on the server

Somewhat of a side note, but the same holds true for many equivalent native apps as well.

> There's near-zero interoperability.

This is the problem that Tim Berners-Lee's Solid is trying to solve: https://solid.inrupt.com/


Well, I guess we could build on this old idea.

https://www.lively-kernel.org/


Interesting, but that's still something to be used within a web application. Here, I'm talking about fixing up the browser. I don't see how I could use Lively Kernel to e.g. force consistent advanced autocomplete UI on every text field in every tab, or to shuttle data between two tabs harbouring applications that were not explicitly designed to talk to each other.


The browser developers have some hard constraints that prevent them from making the browser better for desktop applications. There were a lot of battles at Google over making ssh work in the browser while supporting standard terminal keyboard shortcuts. Even today I hate working in Jupyter because I can't alt-tab-navigate the tabs (it switches browser tabs).

Everything you described is an expected consequence of inner-platform syndrome. It would IMO have been better if libraries like Qt had made cross-platform, network-driven desktop application development the preferred software delivery mechanism.


It's great to see how far WebAssembly and Emscripten have come, and this is a really cool app!

> If you haven’t already tried Pyodide in action, go try it now! (50MB download)

I wonder, though, how ready it is for production usage. At Repl.it, years ago[1], we moved away from browser-based execution to the cloud because it excluded many users who don't have the client-side firepower to download/parse/execute this much JS/WASM.

Repl.it can already run a lot of the examples here [2], but I gotta say the DOM integration is pretty neat. We could do interactive stuff on Repl.it using our new Graphics infrastructure (GFX)[3], but there is always the roundtrip delay. Here is the same matplotlib example running and streamed down over X11: https://repl.it/@amasad/matplotlib

[1]: https://news.ycombinator.com/item?id=16578943

[2]: https://repl.it/@amasad/pyodide-example

[3]: https://repl.it/blog/gfx


Browser vs server-side is always going to be a tradeoff, yeah. In some cases you're ok with paying for server time, and then don't need to depend on clients running your code. But in other cases it's much more cost-effective to run code on the client.

Note that the browser has gotten a lot better in the years since repl.it moved away from running code there. In particular wasm parses and executes a lot faster and takes a lot less memory than JS.


We're overdue to toy around with wasm and reevaluate it. It would be particularly great if we could make the decision on a per-use-case basis, i.e. fall back to the cloud on underpowered devices or in instances where it makes more sense.


That is no doubt a great tool. But imagine even one guy like me has many GPU cards for trying some large-memory AI model (not even in production). It just can't work in the cloud during development.

The appeal of this is standardising on a certain way of doing things so we can share with others. So far it's all Python and R notebooks. Is the browser an option?


Browsers don't expose access to GPU compute APIs or multiple GPUs. Any web GPU usage you see right now is limited to shader programs on a single GPU.


Tensorflow.js can actually use WebGL to accelerate machine learning. I have not tried it, though.


Tensorflow.js is an example of using the fragment shaders I was talking about:

"Tensors are stored as WebGL textures and mathematical operations are implemented in WebGL shaders." https://www.tensorflow.org/js/guide/platform_environment


I'm curious what the security implications would be of running client side vs. server side for something like REPL.it.


We spend an inordinate amount of time and money on security. We didn't when we ran client-side.


There is one area where "computing in the browser" misses the mark a little - browser interfaces (like Jupyter) are often used not just for the convenience they offer, but also because they serve as thin clients, which provision vastly better computing resources than you have locally.

So while I do some of my coding in the browser, the code executes on an EC2 instance far, far away, close to my data, with excellent networking, etc. There is very little that a Python stack directly in my browser would offer, to me (!) at least (your mileage may vary).


Not just that, but a lot of organizations use this setup to give a good starting interface to data scientists who might know nothing about how to set up all the Python packages...


A data scientist who knows nothing about how to set up the tools he or she uses daily?

Tell me more.


Well, I have some friends doing genetics, and their infrastructure is maintained by an IT team. For them, the whole programming experience is connecting to a Jupyter server running on a pre-configured machine. I think there are even some online services offering this kind of configuration for teams.


Yes, it does happen. More often than I think is healthy.


Incredible undertaking. Looking forward to playing with it for simple machine learning tasks. I think that loading packages directly from PyPI will be a huge milestone.


Would be curious to see if a version of Cython could be made to work with this. Much of my data analysis is me passing numpy arrays into C functions, which Cython helps a lot with. So I would be looking for a version of Cython that converts that to WebAssembly.


Cython works for ahead-of-time compilation (Pandas requires it, for example). Making it work in the browser would also mean putting a C compiler there, which people have done. I have no idea how well that would all hold together, though.


Ideally, Cython would allow compiling directly to WebAssembly (maybe using Pyodide's converters), without going through C and thus without requiring a C compiler.

Of course that is a huge job and might very well never happen.


WebAssembly?


Isn't it already doing that, converting the C generated by Cython to JS with Emscripten? Otherwise a huge number of numerical libs wouldn't work. Or did you mean directly, without going through C?


I think you're confusing Cython (Python extended with C - https://cython.org/) with CPython (the canonical Python implementation, written in C).


I don't think I am


I'm not sure; I didn't see Cython in the list of supported libraries. But if it's doing what you described, yes, that would be awesome.


I want to believe so bad. Being able to do web development with Python instead of JavaScript has been something I've dreamed about forever.


I find it ridiculous that JavaScript is the only language available for the web. Almost every other platform, from bare metal to Excel, supports multiple languages. It's well past time to untie the browser from a mandated language.


Well, that's largely the problem that WebAssembly is aiming to solve. Progress has been significant.


You've been living under a rock :)

People have been using a variety of languages that compile to JS on the frontend for many years: Elm, TypeScript, ClojureScript, Reason, Scala.js, Fable, etc. In fact, even modern JS is often compiled into backward-compatible legacy JS these days.

WebAssembly is porting the C machine model to the web, so for the foreseeable future JS is still going to be a better runtime for most GC'd languages.


Why not just use Django or Flask? Python web dev is great because you do have access to so many of these incredible scientific libraries like Pandas, etc.


I don't know anything about web dev, so maybe it's a stupid question, but you still need javascript for the front end, right? You can't build the front and back end development completely in python?


I think they're implying that you use a Python web framework and server-side rendering.


Right now, realistically, you can't. With Pyodide, you may soon be able to.


Those work great for the "back end" but don't cover the aspects of the in-browser "front end" unless you do everything on the server side.


The problem is that now you have to rewrite your entire application around the web stack. That's a huge impedance mismatch. Taken to the extreme, you end up with Pandas.


Python, no, but there are a lot of languages that have a JS transpiler, like ClojureScript, BuckleScript, Fantom, Nim, Haxe, etc. And of course C and C++, as this project shows.

Actually, it seems there are some Python transpilers as well, though they don't seem as supported as what I listed.

Anyways, my point is there are quite a lot of options on top of JavaScript. And for web dev, transpilers are the way to go, because bundle size matters for page loads, so options like this one, where you compile the whole interpreter into a bundle, aren't viable for that. For example, I use ClojureScript for web dev, and it works great. There are options to escape the shackles of having to write JS.


There's Brython. https://brython.info/

I worked with a researcher a few years ago who used it to do somewhat complex frontend presentations.

But why? Just learn a little JavaScript. It's fun.


It's never "learn a little JavaScript."

It's "learn webpack, learn Parcel, learn React, learn Vue, learn TypeScript, learn Redux, learn npm, learn yarn", etc., etc.

If it were Python, at least it wouldn't change every three months.


Python has spoiled every other programming language for me. I look at their syntax and it makes me sad.


It was available as a browser plugin since the early days; it just went nowhere.

If I remember correctly, ActiveState was the one offering it.


> It’s also been argued more generally that Python not running in the browser represents an existential threat to the language

I do not believe that.


Is anyone using this yet? What’s your experience been?


I've used it for a couple small projects in the context of trying Iodide.

It's pretty good for data science - I can't speak to DOM manipulation much; I've only done very minimal work with it for that. My two cents: it's a bit rough around the edges - the performance is noticeably worse, and for data science, medium-sized datasets can freeze a tab for 10-15 seconds. Sometimes you also hit an odd edge case where standard Python code doesn't run perfectly due to some combination of operations.

That said, overall, I'm really excited by where it's going, and I've always been able to work around the limitations.


Can this be used to create a desktop application using Electron?


That would be an interesting stack if you could write desktop apps using Python but use the DOM/Browser stuff for the UI layer.


Yeah, I often run into the need to develop basic UIs for data analysis for people with limited programming experience. Currently I have to settle for Jupyter notebooks with embedded ipywidgets, but that's still not very user-friendly. Desktop app libraries like PyQt5 are overkill for what I need, and most of the visualization libraries I use are designed for Jupyter notebooks/the web, e.g. Bokeh.

If I could just port my jupyter notebook+widgets+plots into an Electron window and ship it to people that'd be awesome.


Maybe check out Dash by Plotly. It's helped me address that exact situation as well.


A couple of years ago, when I started to learn Python, I made something like this: desktop applications with Flask and PyQtWebKit.

https://github.com/smoqadam/PyFladesk

and this is the sample app:

https://github.com/smoqadam/PyFladesk-rss-reader



Please don't; Qt for Python is a much better option.

https://www.qt.io/qt-for-python


Looks really nice!

I prefer the look of the JSMD format to the Jupyter notebook format and hope that is something that might get integrated into JupyterHub as well.

Question: is there some documentation for deploying your own Iodide server behind a firewall? The original Iodide post mentions this is possible.


It's a cool project technically, but I don't understand the use case?

I mean, surely everyone can just have real Python/JupyterLab - if you run it in Docker, it's easy to handle the dependencies, etc.

Am I missing something?


I gave a keynote at PyCon last year that touched on this: https://www.youtube.com/watch?v=ITksU31c1WY#t=32

For me, the most important aspect of Pyodide is that it allows the scientific Python stack to go everywhere the Web goes. And the Web goes everywhere.

That means I can share a link to a Pyodide notebook, and the recipient gets a fully interactive experience, no installation required. And that's really important for the generation that's getting their start in computing through classroom iPads, Chromebooks, etc., which do not easily allow for "real" Python.


As someone not familiar with any of this: is that not exactly what Jupyter Lab does? I understood that the main difference between Pyodide and that is that Pyodide allows you to also use e.g. D3 for rendering?


A big difference is where Python runs. With Pyodide, everything runs entirely in your browser, and the server side can be completely static. With Jupyter, you need a separate copy of Python running on a server somewhere to actually perform any computation.
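
As a rough sketch of what "completely static" means here: a plain HTML page can pull in Pyodide and run numpy with no server-side Python at all. The CDN URL and the `languagePluginLoader` global below are taken from the linked article; treat the details as approximate:

    <!-- served as static files; all computation happens in the browser -->
    <script src="https://pyodide.cdn.iodide.io/pyodide.js"></script>
    <script>
      // languagePluginLoader resolves once the wasm runtime is ready
      languagePluginLoader
        .then(() => pyodide.loadPackage('numpy'))
        .then(() => {
          const result = pyodide.runPython(
            'import numpy as np\n' +
            'float(np.linalg.norm(np.ones(10)))');
          console.log(result);  // a plain JS number, computed by numpy
        });
    </script>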


Wouldn't it be more straightforward then to compile Python to WASM, and combine the resulting binary with Jupyter so everything can run in the browser?


Compiling Python to WASM was far from straightforward. That's essentially all Pyodide is, and why it is impressive. Presumably no one has taken the time to combine it with Jupyter since it is so brand spanking new.


Ah, thanks for the clarification, I did not know that.


Interoperability between the JS and Python ecosystems. This is pretty much a holy grail. By interoperability, I mean Python functions can render to the browser, and your JavaScript functions can include a Python library directly.

You have a single runtime (JS) executing both your front-end and backend code.
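
A small sketch of that two-way bridge as it exists in Pyodide already (the `from js import ...` mechanism is described in the article; the snippet assumes pyodide.js is loaded on the page):

    // JS calling into Python: the return value crosses over as a JS value.
    const mean = pyodide.runPython(
      'from statistics import mean\n' +
      'mean([1, 2, 3, 4])');
    console.log(mean);  // 2.5

    // Python calling into JS: render straight into the page's DOM.
    pyodide.runPython(
      'from js import document\n' +
      "div = document.createElement('div')\n" +
      "div.textContent = 'rendered from Python'\n" +
      'document.body.appendChild(div)');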


Sounds like you could use it for interactive visualizations in the browser, similar to D3


I'm looking forward to trying out Pyodide in Electron so that I can make use of all the great Python scientific and signal processing libraries for data processing.


lol, what a crazy Rube Goldberg machine - as the person below asks: why not just use "native" Python?


PWAs running in a sandboxed/secured environment where installing native Python isn't an option, bolting existing Python code bases onto existing Electron code bases... There are plenty of use cases. Just because you haven't encountered one doesn't mean there aren't legitimate applications. It's hard/impossible to judge someone's tech stack without understanding the constraints they are operating under.


Exactly. Otherwise you'd need some other way to communicate between Electron and Python. This is possible (you could use ZeroMQ), but it would be a huge pain.

This would easily make sense where you want to execute your app's logic in Python but keep the UI in HTML and JS.
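
For illustration, the Electron side of that ZeroMQ bridge might look roughly like this, using the `zeromq` npm package's classic callback API (the port, the message shape, and the Python REP socket on the other end are all assumptions):

    // In Electron's main process; a Python pyzmq REP socket is assumed on :5555.
    const zmq = require('zeromq');

    const sock = zmq.socket('req');
    sock.connect('tcp://127.0.0.1:5555');

    // Every call into Python is an explicit serialize/send/receive round trip -
    // the "huge pain" compared to sharing one runtime.
    sock.send(JSON.stringify({ op: 'analyze', data: [1, 2, 3] }));
    sock.on('message', (reply) => {
      const result = JSON.parse(reply.toString());
      console.log('python replied:', result);
    });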


Definitely. And a lot of the motivation for that type of approach is driven by team skill portfolios rather than raw technical merit. If you have a data science team specialized in python and a front end team specialized in JS (which I think we can agree describes the overwhelming majority of DS and FE teams), there's a really strong organizational motivation to bolt python and JS together in flexible ways that match the available deployment infrastructure, which, like it or not, includes electron.


Why wouldn't you just use regular Python?


My goal would primarily be to avoid having to make sure there's a working Python installation with all the right dependencies set up on a client's computer. That's been a huge problem for me in the past during deployment (esp. on older Windows OSes).

A nice side benefit I foresee would be easier coordination of state between a JS front end and a Python process running specialized computations. The best approach I'm aware of currently involves sending data back and forth as JSON and trying to mirror what's going on in Python with JS state management. The system described in this article, where JS and Python can reference the same objects (and somewhat share the same typing?), sounds like it could be an improvement.
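
For concreteness, that JSON round trip looks something like this on the JS side (the endpoint and payload shape are made up for illustration; the Python process would expose a matching HTTP handler):

    // Current approach: serialize state, send it to the Python process,
    // then mirror whatever comes back into the JS state store by hand.
    async function runComputation(state) {
      const response = await fetch('http://127.0.0.1:8000/compute', {  // hypothetical endpoint
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(state),
      });
      const pythonState = await response.json();
      Object.assign(state, pythonState);  // keep the two worlds in sync manually
      return state;
    }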


You can bundle the whole Python distribution with your application and it will still be smaller than Electron. Recent releases of Python have an embeddable distribution for this purpose: https://docs.python.org/3.7/using/windows.html#windows-embed...

This is also the way Java is supposed to be deployed on client machines nowadays, apparently - using the jlink tool to create a bundle of the Java runtime for your application.


I wonder if that could be accomplished with something like PyInstaller?

I guess if you're using JS features too then there might be other benefits.


Can it run CUDA? Otherwise it's not that practical.

Of minor interest: can this approach run J/K or, more importantly, Lisp?


Most practical python programs do not use CUDA, even scientific ones.


Says the MKL user.

(Sorry, couldn't resist).


Give browsers access to CUDA and the bitcoin mining malware will drain your battery faster than you can hit the back button.


Would disabling the bitwise operations be an effective way to deal with this?


You can do GPGPU stuff via WebGL, right?


WebGL doesn't have compute shaders. There are (gross) hacks that "emulate" GPU compute on top of stock shaders, but they're limited and can't possibly compete with CUDA.


Compute shaders are available in Chrome on Windows if you start Chrome with some flags. Here are some compute shader demos: https://github.com/9ballsyndrome/WebGL_Compute_shader

And here is a draft spec for WebGL2 Compute: https://www.khronos.org/registry/webgl/specs/latest/2.0-comp...

It's only a matter of time until this is available in Chrome and Firefox by default. Probably never in Safari, though, since it doesn't even support WebGL 2.


I wonder why they are talking about "WebGL 2 compute" instead of a new WebGL version based on GLES 3.1. WebGL 2 is based on OpenGL ES 3.0, and the major feature of OpenGL ES 3.1 was compute shaders.

Apparently one snag in all this is that Apple's OpenGL version is stuck in a time before compute shaders, and all Mac/iOS browsers currently implement WebGL on top of OpenGL. (ANGLE doesn't support Metal).


People have been doing GPU computation for a long time before the GL "compute shader" feature. There's nothing emulation-y about it. The shading language (GLSL) and available data types are the same in "compute shaders" as in the familiar fragment/vertex shaders.

Compute shaders are just a shader type in OpenGL (and possibly a future version of WebGL) that has some convenient properties; for example, they can more easily run out of step with the rendering pipeline if you have an application wanting to mix OpenGL/WebGL graphics and non-graphics compute concurrently. See e.g. https://www.khronos.org/opengl/wiki/Compute_Shader#Dispatch



I don't know the details, but the WebGL backend will have severe limitations because it doesn't use compute shaders or CUDA. This means that certain functionality, like random writes to arbitrary buffers, can only be emulated through workarounds that are an order of magnitude slower than compute shaders, and some things are going to be impossible entirely. There is something about CUDA in the link you provided, but since that isn't natively supported by browsers either, it will require the user to install something or use a server backend, where communication between server and client is going to be an excruciatingly slow bottleneck.

So no, this isn't a web thing, and it's not useful for web apps/web pages, except maybe for a very limited and small set of use cases.


When I last benchmarked tensorflow.js, it was 40 times slower than native tensorflow on my laptop. However, even if WebGL compute shaders were available, Nvidia's cuBLAS and cuDNN libraries would still be faster.



