That said, between this, iodide and ObservableHQ, I guess I give up. The browser is the new OS.
My question now is - how can we make the browser environment more like Emacs (bear with me)? My main complaints are:
- Browser ergonomics absolutely suck, and there's nothing you can do about it. You're at the mercy of each service's UI, which usually means you'll be clicking a lot and maybe sometimes get a keyboard shortcut here and there. Forget about advanced features, or consistency between sites.
- You have near-zero control over your computing environment. Between sandboxing and half the code being an obfuscated blob of transpiled JS (the other half of the code is on the server), you can maybe automate the UI a little with userscripts and fix it up with userstyles.
- There's near-zero interoperability. Unless service authors agree on and implement some APIs, things can't talk to each other. Forget about making them talk. Whatever little wiring you can sometimes do (thanks to services like Zapier and IFTTT), you never have control over the process, and it always involves communication through third-party servers, even if all you'd like is to transfer some local data between browser tabs.
If the browser is the next OS, can we make it suck less than desktop OSes in terms of ergonomics / productivity? Desktop OSes already suck in this regard compared to the promise of old Smalltalk and Lisp systems (that's why I live in Emacs most of the time), so all I see is a downwards trend.
The current state of things is fine for casual use and casual users, but if I'm to spend 8+ hours a day doing serious work in a web browser, the browser needs to be better.
In other words, imagine that instead of being forced to digest a plurality of static and unforgiving view hierarchies from designers on high, you have a single, long-lived, personalized UI that is flexible, changes with your needs, and allows you to integrate new data and services alongside or into existing workflows. Think of it as a personal UI agent.
I like this model of software primarily because of how much creative power and agency it gives end users. "B2C" could end in "creator" instead of "consumer". It's unix-y in its composable nature, and more in line with the hopes we had for the web's ability to unlock creativity in its infancy.
It also seems like a step towards a world where we see fewer ads, because there's more of a culture of paying for digital products by usage or subscription (though ad-supported options would remain; I don't see this as a revolution against ad-supported models).
Obviously, lots of challenges to execution, but would love to hear of anyone else thinking along these lines!
I tentatively believe in this hypothesis. That's part of the reason I moved most of my computing into Emacs - I get greater control over the UI there, and reap the benefits of deep interoperability.
I think this would be a working system. You'd have a separate market for services, and a separate market for software consuming these services. I doubt most regular people would become full-blown creators, but they would happily shop for more ergonomic tools and customize them to the limit of their needs. It's kind of similar to the "right to repair", where nobody honestly expects that everyone will be fixing their appliances themselves - the point is that those who can, would, and they would also offer their services to those who can't or don't want to.
I'm not sure how to get to this world. That's kind of the question in my original post - I see fixing the browser for productive use to be a potential stepping stone involving the same or similar changes.
I have one idea I'm hesitant about: it would be nice if we could force all services on the Internet to communicate with open and documented protocols and APIs, while simultaneously banning any kind of "you can only use our official app" clauses in ToS. That is, force the decoupling between a service and client software that consumes it.
Legally-enforced open APIs and client agnosticism like you mention downthread sound wonderful (maybe we could start by deprecating the User-Agent header), but for similar reasons, I can't see it ever taking off.
Pessimism notwithstanding, here's an idea: imagine a browser extension where you could implement privileged pages/tabs that had access to other tabs' script environments/DOM, and could inject elements from other pages into itself. So you could write a workflow like:
1. open a Wikipedia page in a background tab (`const wiki = new Tab(etc...);`)
2. run some JS to pick out values from a <table> in it
3. open a Google Docs spreadsheet in a background tab
4. fill out the spreadsheet using the previously obtained data
5. as a bonus, inject the table + spreadsheet elements into the extension page for side-by-side reference
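A sketch of what that hypothetical `Tab` API might look like - everything here (`Tab`, `query`, `fill`, the URLs) is invented for illustration and stubbed out, so only the workflow's shape is shown, not a real extension API:

```javascript
// Hypothetical privileged-tab API, stubbed for illustration only.
// A real extension would back these with browser.tabs and content scripts.
class Tab {
  constructor(url) {
    this.url = url;     // step 1/3: "open" a background tab at this URL
    this.cells = [];
  }
  // Step 2: pick values out of the background tab's DOM (stubbed).
  query(selector) {
    return [["row1", 1], ["row2", 2]]; // pretend these came from the <table>
  }
  // Step 4: write values into the page (stubbed: just record them).
  fill(rows) {
    this.cells = rows;
  }
}

// The workflow from the numbered steps above:
const wiki = new Tab("https://en.wikipedia.org/wiki/...");         // step 1
const rows = wiki.query("table.wikitable tr");                     // step 2
const sheet = new Tab("https://docs.google.com/spreadsheets/..."); // step 3
sheet.fill(rows);                                                  // step 4
console.log(sheet.cells.length); // → 2
```

Step 5 would then be a matter of cloning the relevant DOM nodes from both tabs into the extension page.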
Not sure what the emacs analogy would be - maybe something like org-mode and Babel being able to generate things using the contents of other buffers.
Yup. Also because of security - that, I imagine, will be the argument legitimizing the whole thing. I'm acutely aware of how interoperability conflicts with security. Every security compromise you make to make the software more useful is also a way for users to self-pwn when subjected to social engineering attacks. My Emacs is a monster of productivity, but then again it's so niche that nobody is writing elisp malware.
> Not sure what the emacs analogy would be - maybe something like org-mode and Babel being able to generate things using the contents of other buffers.
Your example is actually something one would do in Emacs - (1) open a buffer with some data in the background, (2) copy the interesting parts, (3) paste it into a new buffer in some specific major mode that helps you (4) restructure the data, and (5) show the work; all in a single step you could bind to a key or button.
Different example of Emacs-style integration I'd like to do: run all my IMs in the browser (FB Messenger, Slack, Skype, Telegram) as background tabs, and have a resident piece of code that reacts to new messages in each and aggregates them, displaying a foreground tab with an "event stream" ordered by time (and possibly a "reply" button). Kind of like an IRC channel unifying all your browser IMs, with the ability to add new ones without having to deal with OAuth and API calls to IM servers.
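The aggregation half of that is simple to sketch - the sources and message shapes below are made up; in reality each source would be fed by a content script watching one IM tab:

```javascript
// Stub "IM tabs": each yields messages observed in that tab.
// In a real extension these arrays would be filled by content scripts.
const sources = {
  slack:    [{ time: 3, from: "alice", text: "standup?" }],
  telegram: [{ time: 1, from: "bob",   text: "lunch" },
             { time: 5, from: "bob",   text: "nvm" }],
};

// The unified "event stream": tag each message with its origin service
// and order everything by time, like one IRC channel for all IMs.
function eventStream(sources) {
  return Object.entries(sources)
    .flatMap(([service, msgs]) => msgs.map(m => ({ service, ...m })))
    .sort((a, b) => a.time - b.time);
}

const stream = eventStream(sources);
console.log(stream.map(m => `[${m.service}] ${m.from}: ${m.text}`));
```

The hard part, of course, is the watching - reliably detecting "new message" in four different, obfuscated, ever-changing DOMs - which is exactly the ergonomics problem from upthread.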
Elsewhere in the thread, 'zapzupnz says that "data can be available online without necessarily being web first. Heck, most things are — data provided to mobile apps and desktop clients via services that might have a web frontend.". Well, then how come I can't easily do automated queries to my bank to fetch my balance, put it in whatever accounting app I want to use, and display it on my personal dashboard for extra fun? The only way I can do this today is by scripting around the bank's authentication and scraping their page - which risks termination of my account for ToS violations. This is less about browsers and more about API control and problematic ToS - but it would be much easier to do if I could easily shuttle arbitrary data between browser tabs myself.
BTW, this thread made me realize that I'm not very clear on what a browser extension can or can't do; maybe the situation is slightly better than I think. Need to check it out.
Agreed! Jupyter notebooks are great, but I miss my Vim keybindings when editing code in cells.
(Plugins like Vimperator always collide with some native keybindings.)
Since some of the notebooks I work on are collaborative and versioned with git, this was painful for reading diffs (even more so with the ipynb format), so, to my regret, I dropped it.
I did spend several days in a row typing code 10+ hours a day in an Observable notebook (I had some idea I needed to validate and demonstrate to other people in the company). While the tool itself was amazing, the coding aspect was not.
One small way in which Iodide advances this: by having the editor be a single text-editing widget (rather than multiple cells as you see in Jupyter and others), it should be easier to replace that widget with an alternative editor, or (using an extension) link to a native editor on the machine.
But all of these other issues, I agree, are things that would be nice to improve upon. I think this comes in part from the fact that so much of the "productivity on the web" stack is dominated by big players (Google Docs, Microsoft Office 365) that there hasn't been a big push for interoperability and customizability. I'd love to see a movement around that (but it's definitely out of scope for what the Pyodide team can currently take on!)
Sure, if your core business is in the Cloud and browser market, this may look different, but I believe the terminal will still continue to serve the scientific computing community well…
> terminal will still continue to serve the scientific computing community well
I hope so, but note how data is on the web, the output is expected to be on the web (so others can consume it), collaboration is expected to be on the web, and now you can code things on the web... Everything is moving into the browser.
That's no more true now than it has been for the past 10 to 15 years. However, output is just one view; something that automatically generates HTML to display things, like scientific data, is great, but that's far from the only, or even primary, way that many things are consumed from the internet.
Other views exist that can visualise the same data in other ways. That is, data can be available online without necessarily being web first. Heck, most things are — data provided to mobile apps and desktop clients via services that might have a web frontend.
My personal thinking is that people really are too hooked on this narrative about the browser as an OS, and maybe have gotten tunnel vision from it.
The internet, as it is today, is still mainly services. Many things may be more and more accessible in the web by preference, yet remain freely available and consumable through other forms (REST APIs, direct database access, etc) and delivered through apps and integrations within operating systems. Some examples might include:
- Dropbox: files sync with the various apps and utilities
- iCloud: mail, contact, calendar are mostly consumed by the native apps. Pages, Keynote, Numbers documents are mostly edited in the Mac apps.
- Office 365: the Windows apps remain the dominant way to access and edit Word, Excel, and PowerPoint documents.
- Github: the most popular source code repository today, but still very much consumed at the command line. The website is strongest at wiki editing, forums, and so on, but the bulk of code moving back and forth is done with the `git` command or IDE integration.
Let us not underestimate how much of the internet is not primarily consumed in web browsers and likely never will be for the vast majority of certain markets (for instance, developers), where internet-located data will be consumed as services that interact with apps much more than as web sites.
It is, though. 10 years ago I'd expect to get a PDF or an .XLS or a bundle of Matlab code. Maybe a static page. Today, if you can't interactively explore the data in the browser, it's considered subpar.
> The internet, as it is today, is still mainly services. Many things may be more and more accessible in the web by preference, yet remain freely available and consumable through other forms (REST APIs, direct database access, etc) and delivered through apps and integrations within operating systems
Disagree. The Internet may be mostly services, but they're services with default UIs you're forced to use, and are not freely consumable through other forms. REST APIs are restricted both in terms of features and what ToS allows you, and more often than not you couldn't build an alternative UI with feature parity to the original one.
> Dropbox: files sync with the various apps and utilities
Sorta, kinda. Can I have an alternative implementation of the Dropbox client? The problem isn't bad here, though - Dropbox does one thing and does it well, i.e. syncing files with real OSes and their filesystems. You can work with that to the extent your OS lets you.
> Pages, Keynote, Numbers ... Office 365
All being increasingly replaced by Google Docs, because it's free and you already have an account. You can't edit those outside the browser.
Exception, not the rule, and it's an artifact of the fact that developers still mostly work on desktop OSes with real filesystems. I'm worried about the future in which we'll all be using some future evolution of VS Code in the browser, communicating with future Github in the background over some APIs you can't hook into.
I'm not saying everything is in the browser now. But it sure as hell looks like in 10 years it'll all be.
PDFs still remain a primary form of interchange for academic documentation. People who really want to crunch the data will want raw data files or some sort of common interchange format.
> but they're services with default UIs you're forced to use
I gave a bunch of examples where you're not forced to use them. Most of the popular web services are also available through apps on mobile.
> REST APIs are restricted both in terms of features and what ToS allows you
For public REST APIs, sure. For private use, it's still things like REST and whatnot that power desktop and mobile apps, bringing data from those internet-based services to local clients, completely bypassing the web.
> Can I have an alternative implementation of the Dropbox client?
You can, but that isn't the point. The point is that the Dropbox client, the executable program or mobile app, is how people tend to use Dropbox. If not that, people also connect to Dropbox through alternative third party apps.
It's completely possible, and I wager mostly the case, to bypass the web.
> All being increasingly replaced by Google Docs
> You can't edit those outside the browser
You can, using Google's apps. They're not fantastic, but that's not the app's fault, that's Google's. Meanwhile, iWork and Office 365 let one easily edit documents with the desktop and mobile apps — and Office remains king of the hill in enterprise, whatever people may wish to believe.
> Exception, not the rule
But still highly relevant.
> I'm not saying everything is in the browser now. But it sure as hell looks like in 10 years it'll all be
It really doesn't. There's no pattern on which to base this. Apps are far more popular, especially thanks to the rise of smartphones; the web remains a convoluted mess of inconsistent looks and feels that increasingly assumes a high-quality broadband connection, even though vast portions of the developed world still don't have adequate internet to completely shift away from local apps; and plenty of enterprises still run on older versions of software and standards.
When one examines the whole picture, the web is far from being as ubiquitous as you imagine it to be or will be — it's merely the next biggest alternative that happens to be huge.
There are also plenty of cool things on mobile OSes.
Also both iOS and Android have very interesting architecture features, still not widespread on desktop OSes.
Most of those heavily favour security at the expense of interoperability and user control. Not exactly the direction I'd like to see things heading on the desktop. Compared to laptops and PCs, mobile devices are essentially interactive TVs.
But yeah, I want to see sandboxes everywhere, and I also enjoy the direction that OS X and Windows are going in that regard.
Somewhat of a side note, but the same holds true for many equivalent native apps as well.
> There's near-zero interoperability.
This is the problem that Tim Berners-Lee's SOLID is trying to solve: https://solid.inrupt.com/
Everything you described is an expected consequence of inner-platform syndrome. It would IMO have been better if libraries like Qt had made cross-platform, network-driven desktop application development the preferred software delivery mechanism.
> If you haven’t already tried Pyodide in action, go try it now! (50MB download)
I wonder though how ready it is for production usage. At Repl.it, years ago, we moved away from browser-based execution to the cloud because it excluded many users who don't have the client-side firepower to download/parse/execute this much JS/WASM.
Repl.it can already run a lot of the examples here, but I gotta say the DOM integration is pretty neat. We could do interactive stuff on Repl.it using our new graphics infrastructure (GFX), but there is always the roundtrip delay. Here is the same matplotlib example running and streamed down over X11: https://repl.it/@amasad/matplotlib
Note that the browser has gotten a lot better in the years since repl.it moved away from running code there. In particular wasm parses and executes a lot faster and takes a lot less memory than JS.
The appeal of this is standardising on a certain way of doing things so we can share with others. So far it's all been Python and R notebooks. Is the browser an option?
"Tensors are stored as WebGL textures and mathematical operations are implemented in WebGL shaders." https://www.tensorflow.org/js/guide/platform_environment
So while I do some of my coding in the browser, the code executed is an EC2 far far away, close to my data, with excellent networking etc. There is very little that a Python stack directly in my browser would offer, to me (!) at least (your mileage may vary).
Tell me more.
Of course that is a huge job and might very well never happen.
People have been using a variety of languages that compile to JS on the frontend for many years. Elm, TypeScript, ClojureScript, Reason, Scala.js, Fable, etc. In fact even modern JS is often compiled into backward-compatible legacy JS these days.
WebAssembly is porting the C machine model to the web, so for the foreseeable future JS is still going to be a better runtime for most GC'd languages.
Actually, it seems there are some Python transpilers as well, though they don't seem as supported as what I listed.
I worked with a researcher a few years ago who used it to do somewhat complex frontend presentations.
It’s learn webpack, learn parcel, learn react, learn vue, learn typescript, learn redux, learn npm, learn yarn etc etc.
If it was python, at least it wouldn’t change every three months.
If I remember correctly, ActiveState was the one offering it.
I do not believe that.
It's pretty good for data science — I can't speak to DOM manipulation much; I've only done very minimal work with it for that. My two cents: it's a bit rough around the edges — the performance is noticeably worse, and for data science, even medium-sized datasets can freeze a tab for 10-15 seconds. Sometimes you also hit an odd edge case where standard Python code doesn't run perfectly due to some combination of operations.
That said, overall, I'm really excited by where it's going, and I've always been able to work around the limitations.
If I could just port my jupyter notebook+widgets+plots into an Electron window and ship it to people that'd be awesome.
and this is the sample app:
I prefer the look of the JSMD format to the Jupyter notebook format and hope that is something that might get integrated into JupyterHub as well.
Question: is there some documentation for deploying your own Iodide server behind a firewall? The original Iodide post mentions this is possible.
I mean, surely everyone can just have real Python/JupyterLab - if you run it in Docker it's easy to handle the dependencies etc.
Am I missing something?
For me, the most important aspect of Pyodide is that it allows the scientific Python stack to go everywhere the Web goes. And the Web goes everywhere.
That means I can share a link to a Pyodide notebook, and the recipient gets a fully interactive experience, no installation required. And that's really important for the generation that's getting their start in computing through classroom iPads, Chromebooks, etc. which do not easily allow for "real" Python.
Have a single runtime (JS) execute both your frontend and backend code.
This would easily make sense where you want to execute your app's logic in Python, but want to keep the UI in HTML and JS.
A nice side benefit I foresee would be easier coordination of state between a JS frontend and a Python process running specialized computations. The current best approach I'm aware of involves sending data back and forth as JSON and trying to mirror what's going on in Python with JS state management. The system described in this article, where JS and Python can reference the same objects (and somewhat the same typing?), sounds like it could be an improvement.
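For contrast, the JSON-mirroring pattern being described looks roughly like this - `computeOnPythonSide` stands in for the remote Python process, and all names here are invented for illustration:

```javascript
// Current pattern: the JS side keeps its own mirror of the state, and
// every exchange crosses a JSON serialization boundary - nothing is
// shared by reference. This stub plays the role of the Python process.
function computeOnPythonSide(json) {
  const state = JSON.parse(json);                       // "Python": deserialize
  state.result = state.values.reduce((a, b) => a + b, 0);
  return JSON.stringify(state);                         // "Python": serialize back
}

// JS side: serialize, "send", deserialize, then re-mirror the state.
let jsState = { values: [1, 2, 3] };
jsState = JSON.parse(computeOnPythonSide(JSON.stringify(jsState)));
console.log(jsState.result); // → 6
```

With shared object references, the two serialize/deserialize hops - and the duplicated copy of the state - would disappear.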
This is also the way Java is supposed to be deployed on client machines nowadays, apparently - using the jlink tool to create a bundle of the Java runtime for your application.
I guess if you're using JS features too then there might be other benefits.
Minor interest. Can this approach run j/k or more important lisp?
(Sorry, couldn't resist).
And here is a draft spec for WebGL2 Compute: https://www.khronos.org/registry/webgl/specs/latest/2.0-comp...
Only a matter of time until this is available in Chrome and Firefox by default. Probably never in Safari, though, since it doesn't even support WebGL 2.
Apparently one snag in all this is that Apple's OpenGL version is stuck in a time before compute shaders, and all Mac/iOS browsers currently implement WebGL on top of OpenGL. (ANGLE doesn't support Metal).
Compute shaders are just a shader type in OpenGL (and possibly a future version of WebGL) with some convenient properties; for example, they can more easily run out of step with the rendering pipeline if you have an application that wants to mix OpenGL/WebGL graphics and non-graphics compute concurrently. See e.g. https://www.khronos.org/opengl/wiki/Compute_Shader#Dispatch
So no, this isn't a web thing and isn't useful for web apps/web pages, except maybe for a very limited and small set of use cases.