I'm not sure why WebGPU is a step too far but WebGL isn't. Every other API for using a GPU went the same direction; why should HTML be stuck with a JS projection of OpenGL ES while native developers get Vulkan? The security properties of both are very similar; Vulkan/Metal/DX12 just let you skip varying levels of compatibility nonsense inherent in old graphics APIs.
> why should HTML be stuck with a JS projection of OpenGL ES while native developers get Vulkan?
Because web browsers are supposed to be locked down and able to run untrusted code, not an operating system that reinvents all the same failings of actual operating systems. They should be functionally impaired in favor of safety as much as possible. For the same reason you don't get access to high-precision timing in the browser (a lesson that took a while to learn!), you shouldn't have arbitrary capabilities piled onto it.
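To make that timing lesson concrete, here's a minimal sketch (numbers are illustrative only, and the exact clamping varies by browser and by whether the page is cross-origin isolated):

```typescript
// Sketch: observing the timing restrictions browsers impose after Spectre.
// performance.now() is deliberately coarsened, and SharedArrayBuffer (the other
// usable high-resolution clock) is only exposed when the page is cross-origin isolated.
const samples: number[] = [];
for (let i = 0; i < 1000; i++) {
  samples.push(performance.now());
}
const deltas = samples.slice(1).map((t, i) => t - samples[i]).filter((d) => d > 0);
console.log(
  'smallest observable timer step (ms):',
  deltas.length ? Math.min(...deltas) : 'all samples identical (fully coarsened)'
);
console.log('crossOriginIsolated:', self.crossOriginIsolated);
console.log('SharedArrayBuffer exposed:', typeof SharedArrayBuffer !== 'undefined');
```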
Those are all historical remnants. Modern web browsers serve a radically different purpose than they did in the 90s. It doesn't make sense to even keep calling them "web browsers" since most people don't know what a "web" is, let alone "browse" it.
Modern browsers are application runtimes with a very flexible delivery mechanism. It's really up to web developers to decide what features this system should have to enable rich experiences for their users. Declaring that they should be functionally impaired or what they "should be" without taking into account the user experience we want to deliver is the wrong way of approaching this.
To be clear: I do think we should take security very seriously, especially in the one program people use the most. I also think reinventing operating systems to run within other operating systems is silly. But the web browser has become the primary application runtime and is how most people experience computing, so enabling it to deliver rich user experiences is inevitable. Doing this without compromising security or privacy is a very difficult problem, which should be addressed. It's not like the web is not a security and privacy nightmare without this already. So the solution is not to restrict functionality in order to safeguard security, but to find a way to implement these features securely and safely.
> Modern browsers are application runtimes with a very flexible delivery mechanism.
Clearly this is true. But as someone with an old-school preference for native applications over webapps (mostly for performance/ux/privacy reasons) it irritates me that I need to use an everything app just to browse HN or Wikipedia. I don't want to go all hairshirt and start using Lynx, I just want something with decent ux and a smaller vulnerability surface.
> it irritates me that I need to use an everything app just to browse HN or Wikipedia
But why?
That feels like saying it irritates someone they need to run Windows in order to run Notepad, when they don't need the capabilities of Photoshop at the moment.
An everything app is for everything. Including the simple things.
The last thing I'd want is to have to use one browser for simpler sites and another for more complex sites and webapps and constantly have to remember which one was for which.
Some of us don't use the web for anything other than websites. I'm honestly not even sure what people are talking about with some proliferation of "apps". There's discord/slack, and...? And chat was on the road to being an open protocol until Google/Facebook saw the potential for lockin and both dropped XMPP.
I already have an operating system. It's like saying I don't need Notepad to be able to execute arbitrary programs with 3D capabilities and listen sockets, because it's a text editor.
You also wouldn't need to remember what your generic sandbox app runtime is. Use your browser, and if you click on a link to an app, you'll be prompted to open the link using your default handler for that MIME type.
> I'm honestly not even sure what people are talking about with some proliferation of "apps". There's discord/slack, and...?
Are you not familiar with Gmail or Google Maps or YouTube?
> I already have an operating system.
But Gmail and Google Maps and YouTube don't run on the OS. And this is a feature -- I can log into my Gmail on any browser without having to install anything. Life is so much easier when you don't have to install software, but just open a link.
> Use your browser, and if you click on a link to an app, you'll be prompted to open the link using your default handler for that mime type.
But I like having news links in Gmail open in a new tab in the same window. The last thing I want is to be juggling windows between different applications when tabs in the same app are such a superior UX.
Imagine how annoying it would be if my "app" browser had tabs for Gmail and Maps and YouTube and my "docs" browser had tabs for the NYT and WaPo and CNN, and I couldn't mix them?
Or if the NYT only worked in my "docs" browser, but opening a link to its crossword puzzle opened in my "apps" browser instead?
That's a terrible user experience for zero benefit at all.
(And I still would have to remember which is which, even if there's a MIME type, for when I want to go back to a tab I already opened!)
Calling Gmail or YouTube apps is already kind of a stretch. Gmail splits everything into separate web pages, with the associated loading times and the need to navigate back and forth. Exacerbating this is that it paginates things, which is something you only ever see in web pages. It lacks basic features you'd expect out of an application, like the ability to resize UI panes. YouTube has a custom, worse version of the <video> tag to prevent you from saving the videos (even CC-licensed ones, which is probably a license violation), but is otherwise a bunch of minimally interactive web pages.
Maps is legitimately an interactive application, though I'd be surprised if most people don't use a dedicated app for it.
The point is you wouldn't have an "apps browser" with tabs. If something is nontrivial, launch it as an actual application, and let the browser be about browsing websites with minimal scripting like the crossword puzzle. Honestly there probably should be friction with launching apps because it's a horrible idea to randomly run code from every page you browse to, and expanding the scope of what that code is allowed to do is just piling on more bad ideas.
> it irritates me that I need to use an everything app just to browse HN or Wikipedia.
...this is possibly missing the point, but it occurs to me that you don't have to. Hacker News and Wikipedia are two websites I'd expect to work perfectly well in e.g. Links.
It's a bigger problem if you want to read the New York Times. I don't know whether the raw html is compatible, but if nothing else you have to log in to get past their paywall.
I don't necessarily disagree. But there's no going back now. There's a demand for rich user experiences that are not as easy to implement or deliver via legacy operating systems. So there's no point in arguing to keep functionality out of web browsers, since there is no practical alternative for it.
If rich ux can be delivered in a web browser then it can be delivered in a native app. I'd assert that the reason this is uncommon now (with the exception of games) is economic not technological.
It is partly economic, but I would say that it's more of a matter of convenience. Developing a web application is more approachable than a native app, and the pool of web developers is larger. Users also don't want the burden of installing and upgrading apps, they just want them available. Even traditional app stores that mobile devices popularized are antiquated now. Requesting a specific app by its unique identifier, which is what web URLs are, is much more user friendly than navigating an app store, let alone downloading an app on a traditional operating system and dealing with a hundred different "package managers", and all the associated issues that come with that.
Some app stores and package managers automate a lot of this complexity to simplify the UX, and all of them use the web in the background anyway, but the experience is far from just loading a web URL in a browser.
And native apps on most platforms are also a security nightmare, which is why there is a lot of momentum to replicate the mobile and web sandboxing model on traditional OSs, which is something that web browsers have had for a long time.
The answer is somewhere in the middle. We need better and more secure operating systems that replicate some of the web model, and we need more capable and featureful "web browsers" that deliver the same experience as native apps. There have been numerous attempts at both approaches over the past decade+ with varying degrees of success, but there is still a lot of work to be done.
Every package manager I know of lets you install a package directly without any kind of Internet connection (I haven't tried much, but I've run into CORS errors with file URIs that suggest browser authors don't want those to work). They also--critically--allow you to not update your software.
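For what it's worth, the file:// behavior is easy to reproduce. A minimal sketch (the data.json name is just a placeholder), assuming Chrome's default policy of giving file:// documents an opaque "null" origin:

```typescript
// Open the containing HTML page via file:// (not http://) with a data.json file
// sitting next to it. In Chrome's default configuration the request fails,
// because file:// pages get the opaque origin "null" and cross-origin rules
// then refuse to hand the file's contents to the script.
async function loadLocalData(): Promise<void> {
  try {
    const response = await fetch('./data.json'); // hypothetical sibling file
    console.log(await response.json());
  } catch (err) {
    // Typically surfaces as "TypeError: Failed to fetch" plus a CORS error in the console.
    console.error('blocked when loaded from file://', err);
  }
}
loadLocalData();
```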
The web today is mostly a media consumption platform. Applications for people who want to use their computer as a tool rather than a toy don't fit the model of "connect to some URL and hope your tools are still there".
The difference is in the learning curve. On Windows, making a native app usually requires you to install a bunch of things - a compiler, a specific code editor, etc - in order to even be able to start learning.
Meanwhile, while that's also true for web apps, you can get started with learning HTML and basic JavaScript in Notepad, with no extra software needed. (Of course, you might then progress to actually using compilers like TypeScript, frameworks like React, and so on, but you don't need them to start learning.)
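As a concrete (and deliberately trivial) example of that low barrier, this is roughly the kind of script a beginner can type into a bare HTML file in Notepad and open directly in a browser, with no toolchain at all:

```typescript
// Put this inside a <script> tag in any .html file and double-click the file.
// No compiler, package manager, or editor plugins required to see it run.
const button = document.createElement('button');
button.textContent = 'Click me';
button.addEventListener('click', () => {
  const note = document.createElement('p');
  note.textContent = `Clicked at ${new Date().toLocaleTimeString()}`;
  document.body.appendChild(note);
});
document.body.appendChild(button);
```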
There's always been a much higher perceived barrier to be able to make native apps in Windows, whereas it's easier to get started with web development.
That settles it then. Let's remove all the innovations of the past 30 years that have allowed the web to deliver rich user experiences, and leave developers to innovate with static HTML alone. Who needs JavaScript and CSS anyway?
Seriously, don't you see the incongruity of your statement?
Putting everything, and I mean everything, into the browser, and arguing for it, is stupid. It stops being a browser at that point and becomes a native system, with all the problems of a native system accessing the open wild all over again. And then? Will there be a sandbox inside the browser/new-OS for the sake of security? A sandbox inside something that isn't much of a sandbox anymore?
Modern operating systems are bad and they are not going to be fixed. So the browser is another attempt at creating a better operating system.
Why modern operating systems are bad:
1. Desktop OSs allow installation of unrestricted applications, and in practice most applications are unrestricted. While there are attempts at creating containerised applications, those attempts are weak and not popular. When I install World of Warcraft, its installer silently adds a trusted root certificate to my computer.
2. Mobile OSs are walled gardens. You can't just run anything: at best you have to jump through many hoops, and at worst you have to live in the right country.
3. There's no common ground across operating systems. Every operating system is different and has completely different APIs. While there are frameworks which try to abstract those things away, those frameworks add their own pile of issues.
The browser just fixes all of this. It provides a secure sandbox which is trusted by billions of users. It does not restrict the user in any way, there's no "Website Store" or anything like that: you can open anything, and you can bring your app online within a few minutes. It provides a uniform API which is enough to create many kinds of applications, and it'll run everywhere: iPhone, Pixel, MacBook, Surface, ThinkPad.
Unrestricted app installation is not bad. It's a trade-off: the freedom to use your own hardware how you want versus 'safety' and restriction imposed by some central authority which aims to profit. Fuck app stores, generally speaking. I prefer to determine which sources to trust myself and not be charged (directly or indirectly) to put software on my own system.
An overwhelming majority of apps do not need full device access. All they need is to draw to a window and talk to the network.
Yes, there are apps which might need full filesystem access, for example to measure directory sizes or to search the filesystem. There are apps that check neighbouring WiFi networks for security, which need full access to the WiFi adapter, and that's fine. But those apps could use another way of installation, like entering a password 3 times and dancing for 1 minute, to ensure that the user understands the full implications of granting such access.
My point is that on a typical desktop operating system today, a typical application has too much access, and many applications actually use that access for bad things, like spying on the user, installing their own startup launchers, updaters and whatnot. The web does this better. You can't make your webapp open when the browser starts unless you ask the user to perform a complicated sequence of actions. You can't make your webapp access my ssh key unless you ask me to drag it into a webpage.
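A rough sketch of what that boundary looks like in practice: a page only ever sees the contents of a file the user explicitly hands over, via a drop event or a file picker (the element id here is made up for illustration):

```typescript
// A web page cannot reach into ~/.ssh on its own; it only sees a file's contents
// after the user explicitly drags it onto the page (or picks it in a file dialog).
const dropZone = document.getElementById('drop-zone'); // hypothetical element
dropZone?.addEventListener('dragover', (event) => event.preventDefault());
dropZone?.addEventListener('drop', async (event: DragEvent) => {
  event.preventDefault();
  const file = event.dataTransfer?.files[0];
  if (file) {
    // Only this one user-chosen file is readable; nothing else on disk is.
    console.log(file.name, 'is', (await file.text()).length, 'characters long');
  }
});
```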
I agree. I'm not knowledgable enough to say for sure, but my intuition is that the total complexity of WebGPU (browser implementation + driver) is usually less than the total complexity of WebGL.
WebGL is like letting your browser use your GPU with a condom on, and WebGPU is doing the same without one. The indirection is useful for safety assuming people maintain the standard and think about it. Opening up capability in the browser needs to be a careful process. It has not been recently.
It's my understanding that the browsers use a translation layer (such as ANGLE) between both WebGL and WebGPU and a preferred lower level native API (Vulkan or Metal). In this regard I don't believe WebGL has any more or less protection than WebGPU. It's not right to confuse abstraction with a layer of security.
My analogy was bad, and, as you (and your sibling post) say, I'm probably wrong to expect WebGPU to have lurking dangers that WebGL doesn't. I was mainly trying to express concern about new APIs and capabilities being regularly added, and the danger inherent in growing these surfaces.
It's clear that you know nothing about how WebGL or WebGPU are implemented. WebGPU is not more "raw" than WebGL. You should stop speaking confidently on these topics and misleading people who don't realize that you are not an expert.
I'd dispute that I know nothing. I'm not an expert but have worked with both, mostly WebGL. Anyways, sorry, it was a bad analogy and you're right, I don't know enough, particularly to say that WebGPU has any unique flaws or exposes any problems not in WebGL. I'm merely suspicious that it could, and maybe that is just from ignorance in this case.
That's incorrect: WebGPU has the exact same security guarantees as WebGL; if anything the specification is even stricter, to completely eliminate UB (which native 3D APIs are surprisingly full of). No data or shader code makes it to the GPU driver without thorough validation, in both WebGL and WebGPU (WebGPU *may* suffer from implementation bugs just the same as WebGL, of course).
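To make the validation point concrete, here's a minimal sketch of the WebGPU side (the broken shader string is obviously just an example, and it assumes a browser that exposes navigator.gpu): malformed WGSL is reported by the browser's own validation layer rather than being forwarded to the driver.

```typescript
// Sketch: WebGPU validates shader code and API usage before anything touches the driver.
async function demoValidation(): Promise<void> {
  const adapter = await navigator.gpu?.requestAdapter();
  if (!adapter) return; // WebGPU not available in this browser
  const device = await adapter.requestDevice();

  device.pushErrorScope('validation');
  const module = device.createShaderModule({
    code: 'fn broken( { this is not valid WGSL }', // deliberately malformed
  });
  // The browser's WGSL compiler reports diagnostics itself...
  const info = await module.getCompilationInfo();
  info.messages.forEach((m) => console.log(m.type, m.message));
  // ...and the invalid module typically surfaces as a validation error, not driver UB.
  const error = await device.popErrorScope();
  console.log('validation error?', error?.message);
}
demoValidation();
```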
> Opening up capability in the browser needs to be a careful process. It has not been recently.
That's what about 95% of the WebGPU design process is about and why it takes so long (the design process started in 2017). Creating a cross-platform 3D API is trivial, doing this with web security requirements is not.
Both WebGL and WebGPU should be locked behind a permission, because they allow fingerprinting of the user's hardware (they even expose the name of the user's graphics card). And because they expose your GPU drivers to the whole world.
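For anyone wondering what that looks like in practice, here's a minimal sketch of the classic WebGL path; browsers increasingly mask or coarsen these strings, so the exact output varies.

```typescript
// Sketch: reading GPU-identifying strings from WebGL, a common fingerprinting signal.
const canvas = document.createElement('canvas');
const gl = canvas.getContext('webgl');
if (gl) {
  // Many browsers expose the (possibly ANGLE-translated) renderer string either
  // directly or via the WEBGL_debug_renderer_info extension; some now mask it.
  const ext = gl.getExtension('WEBGL_debug_renderer_info');
  const renderer = ext
    ? gl.getParameter(ext.UNMASKED_RENDERER_WEBGL)
    : gl.getParameter(gl.RENDERER);
  const vendor = ext
    ? gl.getParameter(ext.UNMASKED_VENDOR_WEBGL)
    : gl.getParameter(gl.VENDOR);
  console.log('GPU vendor/renderer visible to this page:', vendor, renderer);
}
```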