Clang runs in the browser and compiles C++ to WebAssembly (tbfleming.github.io)
427 points by lifthrasiir on Jan 2, 2018 | hide | past | favorite | 308 comments


Clang is pretty big -- as a comparison, my Z80 IDE which uses SDCC (Simple Device C Compiler, i.e. no C++) loads about 2 MB of WASM + 2 MB of data and executes a compile/assemble/link cycle in about 0.5 sec for a small program:

http://8bitworkshop.com/v2.1.0/?platform=mw8080bw&file=gfxte...

Most of the execution time is spent in SDCC's register allocator, which was recently rewritten to generate better code but unfortunately is a lot slower than the old allocator.


What a cool website (and book), I love what you have accomplished.


I guess this just means we’re one step closer to Gary Bernhardt’s vision:

https://www.destroyallsoftware.com/talks/the-birth-and-death...


Every time something like this is posted, I always think of that video.

As ridiculous and played-for-laughs as it is, it's looking more and more accurate (in one form or another) every day.


If we have learned anything in the past 2 years, it is that the difference between parody and reality is vanishingly small.


What happens when stuff like the “exclusion zone” and the war from 2020–2025 comes true...? ¯\_(ツ)_/¯


Next we'll get to a webassembly-only VM that will replace the OS. All 'apps' will run on this VM instead of being native, and most will be cross platform. They'll talk to each other via messaging (using Javascript semantics) instead of bytes-over-pipes as they do today. An integrated globally available, namespaced data store API might replace the filesystem. Each app+version will be accessible by a distinct URL. 'My computer' will finally be fully virtual - a well defined collection of 'apps/data' URLs that materializes wherever I can open one of these webassembly VMs.

With this webassembly VM, early bound (precompiled) lower level languages may not have a strong advantage over late bound, dynamic ones, because the compiler is part of the OS layer (as it should be).

The world seems to be slowly moving, in a roundabout way, towards the OS-browser idea from Alan Kay.


Add metrics and advertising obsessed companies into the mix and the only result could be a complete clusterfuck.

The major issue with web apps is that the customer (or, depending on the company, the bag of meat that watches the ads) has no control over the app or the data. They will be abused by the company in all sorts of creative ways.

And yes, those VMs could be local-only, but why would Google, MS or Mozilla want that? The former two are heavy in the cloud+ads business, the latter was pushing FirefoxOS. Why would web developers want that? All the stuff they write is online-first.


I already see plenty of obfuscated JavaScript out in the wild. Script names have opaque GUIDs and slugs, and the code itself is inscrutable: defining single-character constants and recombining them later, dynamically constructing scripts to be dynamically injected into the page and then run, etc.

I would like to believe that this is done in the name of performance, but my gut tells me this is done in the name of keeping one step ahead of the ad blockers.

As it stands now, I feel that I as a user have no defense against tracking and fingerprinting except by monitoring and controlling access to the DOM at the level of individual operations. I am afraid this is only going to get worse with WebAssembly. "Audit the code yourself" is no longer a viable solution except for a handful of experts.

On the bright side, perhaps WebAssembly will lead to fewer ways of interacting with the DOM, which could make it easier to detect malicious/undesired interactions.


> Add metrics and advertising obsessed companies into the mix and the only result could be a complete clusterfuck.

Case in point: The Web.


The alternative is advertising built into the OS like Ubuntu did and MS and Amazon still do, tracking built into the OS like Google and MS do, etc. Having the trackers run in a sandbox is not great but it's a little better than letting them all run natively.


> Add metrics and advertising...

I assume the ads would sit at the messaging-system level, so if a variable merely contains 'iphone', the ads start to appear.

Joking aside, we need a better security system, because we can never know whether an arbitrary compiler is embedding a malware payload.


I am hoping to time my exit from the field to coincide with this clusterfuck singularity.



Replace “WebAssembly VM” with “JVM” and you will see how it’s already available, and why it won’t catch on.


JVM had to be installed, and it was far from seamless the way OP describes it.

The advantage of wasm is that all the people running Chrome today already have it.


Plus the Java installer would sneak in some junkware/toolbars in the process


Yes, thankfully no web browser vendor would ever do such a thing.


The big difference is that the "Browser VM" has well defined permissions to access resources on the host machine. Just like phone OSes do today. Bluetooth API, Files API, Location API, etc.


At which point you have an OS... except adding the word ‘webassembly’ somehow imbues it with properties OSs don’t already have, because somehow WebAssembly compiles to assembly with extra security and no performance reduction.


Webassembly doesn't require you to install Oracle software, or lag your computer for 30 seconds while it starts up... And it doesn't require Swing or AWT. So, there are those minor improvements.


Swing is a very good improvement over HTML UI hacks.


Java doesn't require Swing or AWT either.


The JVM doesn't compete with C or C++.


JVM bytecode is not a low-level bytecode. It doesn't support value types or arbitrary pointer manipulation within a sandboxed memory area; it has a lot of Java semantics baked in. Performance is in most cases better than JavaScript, but it is still in the same order of magnitude because you have little control over the memory layout.

Webassembly solves a problem that existing low level bytecodes like LLVM IR do not intend to solve. LLVM IR is not stable across LLVM versions and usually is optimized for a specific architecture.

JVM bytecode is stable but high level. LLVM IR is unstable and low level. WebAssembly is stable and low level.

They all solve different problems.


Every time I see such news, I tell myself maybe it is time we port a browser into the Linux kernel and build METAL, like Gary Bernhardt foretold.

https://www.destroyallsoftware.com/talks/the-birth-and-death...


Or directly into the cpu. Since there already is a web server in the cpu [1], adding a browser might enable us to run and use web applications without even installing an operating system.

[1] https://www.networkworld.com/article/3236064


METAL (as proposed by Gary Bernhardt) is not about running a browser in the CPU, but about running all software as bytecode and rendering native code obsolete: process isolation is implemented in software, saving the overhead of syscalls, memory mapping, and protection rings.


IBM’s AS/400 (now eServer) has basically been implemented this way since 1988 (some of it going back to 1979). Everything old is new again.

(It has 128-bit pointers, btw, so it’s future-proof for at least 20 more years)


The original bytecode OS is AFAIK the UCSD p-System (from 1978) https://en.wikipedia.org/wiki/UCSD_Pascal


Having some random thoughts about the comments here, some of which talk about having a "browser" drive large parts of what we consider today an "operating system". I agree that certain aspects of what most modern operating systems do today can be abstracted away behind a unified, convenient (and perhaps browser-accessible) API. The entire user-experience stack comes to mind immediately, but there are counterexamples as well.

The answer to non-unified execution environments (different operating systems and architectures) was the rise of high-level, interpreted languages (with corresponding VMs) and runtime environments that exposed a platform-independent set of APIs. We've been exploring the possibilities of implementing bytecode VMs in hardware (https://en.wikipedia.org/wiki/Java_processor) and of running a managed-code operating system (Microsoft's Midori) for a while now, so the concept itself isn't particularly new.

What is WASM's advantage over, say, Java's bytecode? Did we really have to go through so many hoops (reinvent another instruction set and a VM that can execute it across multiple platforms)? Why wasn't an existing technology reused here to achieve the same or similar goals?


Well, what would you have replace it?

Like most things that become very popular, the web is becoming an os-like platform because it solves a problem. Putting your application at a URL is hugely more accessible than asking someone to download something. Maybe not to us HN readers, but to my mom, who's skeptical to a fault of anything that says download, it's huge.

HTML+CSS+JS is everywhere and complete in a way that literally nothing else is. Yeah C is everywhere but (before nowish) you couldn't write a C application once and have it run the same pretty much everywhere.

But chuck, you say, HTML has notoriously bad compatibility and working with it is frustrating. I propose to you that HTML and CSS have so many compatibility problems because of how pervasive it is and has been for so long, and how many different implementations of it exist. Anything is bound to develop compatibility issues when it gets as large as the web.

But you're right, it's suboptimal and other tech has tried to be everywhere. Java and Flash did try, but have you ever used a java applet? They're fine I guess, but they're stuck in a square on the page. You can make a java or flash object take up the whole page and completely control the website but then you run into accessibility issues and complexity on your part.

Javascript has grown organically over the past decades for a reason, and it's being used in this fucked up way because, all things considered, it's the best platform for the job right now


Appreciate your insights. I don't really have an answer to your question. I respect the popularity factor, I just can't help but think that we've had (and still currently have) competing technologies which are far more mature and capable. Not saying WASM won't end up on top in the future. There are plenty of applications today that build on parallelisation, synchronisation primitives, IO, IPC... all of which are already perfectly possible today over the JRE/JVM (as an example), but the same can't be said about browsers today (no doubt that with enough work, it can all change sometime in the future). Just feels like reinventing the wheel.


I think it just happens that the browser is the most widely installed VM that is independent of a single major vendor, pretty cross-platform, and has enough features to build applications of any complexity. If you were to design that from scratch, I think something much simpler could be designed, much simpler than the JVM, for example. But how would you install it on every computer out there? Nobody would use it if it isn't installed everywhere.


>What is WASM's advantage over, say, Java's bytecode?

Precise control over memory layout and allocation which can translate into massive performance gains of several orders of magnitude.

LLVM IR isn't stable across LLVM versions.

Webassembly is the solution to a problem that stayed unsolved for decades.

There is no alternative to webassembly. It is in its own category.

Probably the closest competitor you could find is LuaJIT with its FFI module, but LuaJIT wasn't designed as a compilation target. It'd suffer from the same problems that asm.js did.

JVM bytecode is stable but high level.

LLVM IR is unstable and low level.

WebAssembly is stable and low level.


  Precise control over memory layout and allocation which can translate into massive performance gains of several orders of magnitude.
"Can translate" is not "translates to". Performance gains can be achieved when executing an efficient instruction set over a performant VM. Are you arguing that the JVM falls short as opposed to a browser in that regard?

Also, with most operating systems, physical memory is abstracted from user-mode processes entirely. Kernel-mode is a different story, but I fail to see how a browser/WASM in user-mode do better than a JVM in user-mode in that regard.

  LLVM IR is unstable and low level.
Interesting. What makes an instruction set "stable"? Is this about being backwards/forward compatible? About being encoded/decoded efficiently?

What about the actual APIs exposed by the browser to WASM? Are those considered "stable"? Browsers today "break" all the time - inconsistencies between minors of the same browser, let alone different browser vendors.


As usual, politics.


People might not like it but i really think native apps are going to die once everyone figures out how to optimize this crazy web stack.


It's definitely possible but we went through all of this a decade ago with Adobe Air, Microsoft Silverlight, and JavaFX. Nearly a decade before that, in the early 2000s, we went through this with Macromedia Flash, Java Applets, and Microsoft ActiveX.

Trying to replace native applications with runs-in-the-browser RIAs is nothing new, but we're at least finally agreeing on the language/implementation, so maybe this time will truly be different.


> Trying to replace native applications with runs-in-the-browser RIAs is nothing new

There is one thing that is new: users do not need to install a separate piece of software, and it's kept up to date by the browser vendor. For enterprise users that means one less piece of software to certify/test for updates and compatibility, and for private users one less piece of software that installs adware on each update (Java IIRC still does) or carries that "risky" association like Flash does.


Flash Player had something like 97% penetration, so for all intents and purposes it was something that you could always consider as being "there" - and users didn't mind having to install it.

I think a lot of techies underestimate people's willingness to install stuff on their PCs, especially when that stuff is needed to do tasks they really want and are a couple of clicks away.


People use the "average user" as a strawman. He is simultaneously too stupid not to install malware that claims to be a fax, and yet unwilling to exert even the slightest effort to install software he needs to get things done.


> and yet unwilling to exert even the slightest effort to install software he needs to get things done.

Enterprises often disabled (or heavily whitelisted) both Flash and Java in the browser due to malware concerns as they were (and still are) an effective vector for malware.


In addition, each had a separate graphics API, which was imperative and neither as stable, as backward compatible, nor as battle-tested as HTML. Each of those keywords marks a difference between all the previous approaches and HTML. And sometimes/often they were regarded as better.

Every programmer knows HTML, though not the frameworks. But those are separate.


Yeah, because I can so write a website and only test it in one browser and be sure it will work on all of them, right?


Mostly, which is enough to attract lots and lots of amateurs and high-schoolers. Then there's also the standardization committees and special frameworks.

Also the two most popular browsers are multiplatform. You can write a website and make it work on whatever computer you need.


I'm not sure why you are down voted as all that is true and easily verified.


It's not comparable for one simple reason - all the listed techs worked in the browser, but they didn't come with the browser (or, at least, with all browsers). WebAssembly does - and because most browsers auto-update these days, users get on the bandwagon without doing anything at all.


I don't think there were many mainstream browsers without Flash, ActiveX was included in IE when it was the dominant browser.


The projects you mentioned can also solve the portability and distribution parts for apps (like browsers can now), but that's not what the web was originally intended to do.

This time will truly be different because it developed out of a document viewer.


Isn't WebAssembly a VM inside the browser?

If so, I am skeptical, especially with the end of Moore's law.

Edit: I was a bit mis-informed, the design document is an interesting read.. https://github.com/WebAssembly/design/blob/master/Rationale....


It is a VM, stack-based like the JVM. It is called a "WebAssembly VM" in the document.


Unlike JVM, though, it has semantics that make it possible to compile to optimized native code easily, without having to do complicated analysis for more advanced optimizations.


I don't think so, not when my native game needs 50GB of install space. There is no way I'm waiting for that progress bar to download in a webassembly version game. This is one area where native will always have an advantage. The network is no match for the local bus.


Downloading it from Steam takes just as much time.

But that 50GB is probably good for more than 100 hours of playtime, which amounts to a sustained 145 kB/s. With some caching up front, I think this could actually work. There literally is no need to have everything on disk before starting. Maybe a few hundred megs before starting, but even that is less than a few minutes on most connections, which is a lot faster than downloading the full game before playing.
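The back-of-the-envelope number above checks out; depending on whether you read "50GB" as decimal GB or GiB, the sustained rate comes out around 139-149 kB/s:

```javascript
// Sustained bandwidth needed to stream a 50 GB game over
// 100 hours of playtime instead of downloading it up front.
const bytes = 50e9;          // 50 GB (decimal)
const seconds = 100 * 3600;  // 100 hours of playtime
const kBps = bytes / seconds / 1e3;

console.log(Math.round(kBps)); // 139 (kB/s; ~149 if "50GB" means 50 GiB)
```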


You need all the art and sound for the first few minutes. Probably a lot more than 300MB.


PS4 has gone this way already. When downloading a game you might get two progress bars. You can start the game when the first one is done, and it will download the rest of the game while you're playing. Microsoft has had this tech for a long time, I know they use it for MS Office programs at least.


Browsers are pretty terrible at downloading more than a few hundred MB. They can't compare to native download managers like Steam or Blizzard's, or even wget.


I agree, once browser get a good grip on CPU parallelism things will probably be all downhill from there.


How many threads do you think the browsers will allow on a single page?


As many as is needed. And as many processors as are needed. As someone who writes browser based 3D games, I need more power!


A web page should probably not be allowed to use unlimited amounts of computing power without asking for permission first.


I agree! Like 0xfeba pointed out, really annoying things like coinhive could eat all your CPU if no permission was required. Some sort of web API that tells the browser how many threads it can allocate for rendering and code execution. If the thread count exceeds a configured maximum the browser can ask the user for permission.


I don't see why the foreground web page shouldn't have as much access to my desktop's CPU as any native program.


eg coinhive


We need this!


I don't think it'll go that way. I remember when web workers first came out, I wrote a simple raytracer with some GI that optionally used web workers for multiple threads when available. It worked fine until some browser updates later limited the number of web workers a page could use (or the amount of time each worker got, I don't remember; it has been a few years), making the web worker version pretty much the same speed as the regular version.


That is disappointing to hear, I am hoping webassembly will loosen that back up a bit.


Out of curiosity, what technologies are you leveraging to write browser-based 3D games? Lightweight WebGL libraries, something higher-level like Unity, or something else entirely?


Rolling my own graphics engine (ugh!) to create a voxel octree platform. Front end uses C++ to Webassembly for the engine core and Typescript for the UX. Backend uses Java with a MSOA. All using a reactive message system.


Currently quite a lot, more than 10 with the current stable Firefox release, here's a test page: http://pmav.eu/stuff/javascript-webworkers/

(Web workers are around since ca. 2010 so this is not very new)


Do they require you to serialise data to strings or is it possible to transmit typed arrays?


There is support for efficiently passing data around. You can get binary typed arrays directly from XMLHttpRequest without intermediate processing, and you can pass them to Web Workers without memory copies using Transferables.

More recently you have shared typed arrays that work like shared memory between threads (as of Safari 10.1 / Chrome 60, Firefox too)


The browser is gonna become like a front for a distributed OS.


Yep. We already have threads.

What I find most painful about this development is that this new platform requires Javascript and wrapping everything inside an HTML document.


WebAssembly is here to let one replace Javascript, not to depend on it.

(It's the other way around: if you want to keep using Javascript, the expectation is that you'll start to depend on WebAssembly in the future.)


It might go that way eventually, but right now, you still have to deal with JS at least to load up the wasm program and get a useful result out of it to present to the user.


I guess we will see some changes to this as well, if Webassembly really takes off. Which is what it looks like. Could be interesting to see the kind of security issues this brings.


Back in 2010 I considered native to be a fad and that we'd eventually go back to web (just like native desktop apps have gone to web.)

However, 8 years later, things really don't seem to be moving in that direction. Native is always going to be one step ahead in terms of speed and ability.

We would need to see native features start to stall so the web can catch up in ability, and websites need to do a lot of work to catch up in speed.


For every step towards browser-based apps there seems to be a step back again - e.g. React Native, Electron and Android Play on ChromeOS.

Back when the first iPhones debuted without an app store, Apple claimed that everyone could just use web apps. That didn't last long.


I disagree. When people start using native languages like C++ for the Web, they will quickly realize they could have just used C++ to begin with.


You get several things with Web Assembly that you don't get with a straight application:

* Easy distribution without installation

* No worries about dependencies for end users

* Much more consistent, and in most cases, better security

* Cross platform

* No walled gardens


* Now you have signups, instead of installation

* Only Linux end users have to care much about dependencies

* You're kidding right?

* Sure, if you only target one browser your code will probably work more or less the same on that one browser on at least some of the OSs it runs on

* There aren't any walled gardens in PC desktop land worth talking about, other than maybe the package repositories of Linux distributions.


This is a pretty poor response to the parent's arguments...

>Now you have signups, instead of installation

How are these mutually exclusive? Plenty of desktop software requires a signup first. And plenty of webapps do not require a login.

>Only Linux end users have to care much about dependencies

It's rare to find a game that doesn't start by installing DirectX, Visual C++ Redistributable, and more.

>You're kidding right?

Modern browsers have powerful sandboxes and always-active updates to address security threats. Sounding indignant isn't actually making a point.

>Sure, if you only target one browser your code will probably work more or less the same on that one browser on at least some of the OSs it runs on

Browsers are very standardized these days. It's very rare that I write code that doesn't work cross-plat immediately.

>There aren't any walled gardens in PC desktop land worth talking about, other than maybe the package repositories of Linux distributions.

And on mobile? Remember we're talking true cross platform here. Apple's App Store has far more limitations than you'll run into on the web.


>How are these mutually exclusive? Plenty of desktop software requires a signup first. And plenty of webapps do not require a login.

Some != all, it's definitely -not- the majority.

>Modern browsers have powerful sandboxes and always-active updates to address security threats. Sounding indignant isn't actually making a point.

I have a trusted bloated ball of incomprehensible code running whatever it wants with direct access to memory, video and my entire filesystem. Revoking access to the first set of things is impossible and limiting filesystem access is unlikely to be feasible or actionable for most people.

The browser is the epitome of complexity; people tout security in the browser as if it were infallible, yet with that complexity bugs become more common and prolific. Browsers are not a magic bullet, no matter how strong the wish to "pass the buck".

All you're doing by making browsers more complicated is:

A) Giving more attack surface

B) Increasing complexity to audit.

C) Ensuring the big guys are never challenged.


> How are these mutually exclusive? Plenty of desktop software requires a signup first. And plenty of webapps do not require a login.

It isn't. My point is, signups are at least as inconvenient as having to install things, and are quite prevalent on the platform. It doesn't make much sense to credit the web's success on removing the need for installs. Hell, it was still super popular even when installing Flash was basically a requirement.

> It's rare to find a game that doesn't start by installing DirectX, Visual C++ Redistributable, and more.

All of which is handled automatically for the user (and regardless is usually unnecessary and only done the way it is because of Microsoft's distribution license terms for DirectX).

>Modern browsers have powerful sandboxes and always-active updates to address security threats. Sounding indignant isn't actually making a point.

OSs have always-active updates, in Windows's case whether you want them or not, and plenty of security options. Web browsers have a huge attack surface that has been exploited at least as often as OSs.

> Browsers are very standardized these days. It's very rare that I write code that doesn't work cross-plat immediately.

Impressive. I guess all those frameworks out there that are meant to abstract away all the implementation quirks between IE's various iterations, Edge, Chrome, and FF are useless then?

> And on mobile? Remember we're talking true cross platform here. Apple's App Store has far more limitations than you'll run into on the web.

Mobile web is still a goddamned garbage fire as far as I can tell. There's a reason all those popular web sites have native apps in the app store.


Getting people to download and install traditional software is a substantial point of friction. Average users hate managing software installations, updating, fixing broken installations, dealing with malware, et al. The problem spans from the enterprise down to normal consumers.

Browsers as a better software platform, will significantly accelerate the rate at which software eats everything.

Most software should ideally just work, no installs, period. The browser can act as that platform and it can go a lot further than it has so far. More to the point, there is no other future outcome than the browser becoming that vehicle, nothing can stop it from moving that direction at this point.

You don't need something equivalent to maximum native performance to do what 99% of users want to do. The rare edge cases that need extreme performance will remain native.


Really this just points to a lack of vision on the part of operating systems people. There is no reason downloading, possibly "installing" (mostly caching?) and running "traditional software" couldn't be as frictionless as clicking on a link, but that's just not how the software and security model of Windows, OSX or Ubuntu (to name just the household names) are set up.

Browsers probably will win, but mostly by accident and as a result of network effects rather than any inherent qualities. As a platform it's positively awful, though it is getting better in fits and starts.

Ideally we'd have proper sandboxing, proper compilation, proper libraries, proper access to hardware. Alas, all of that good stuff is basically a historical footnote at this point.


I've been thinking lately there should be a file format that's just WebAssembly along with WebAssembly-versions of WebGL, Web Audio and WebRTC.

With the goal of making the runtime simpler to implement for non-browser vendors, while browser vendors could add a profile to run the format.


... or to just load it up in the browser and skip the HTML part. Non-browser vendors will probably have to integrate browsers.


That completely misses the point which is a simpler format to implement and support.


It could've been done. Flatpak and friends are trying to solve the portability problem, but packages are in the tens of megabytes. It probably won't happen now.

Android has tried out this app-streaming model, but seemingly about 0.04-heartedly.


It's not either/or. The browsers will probably eventually be integrated into the OS until they are almost invisible. All apps will be web apps.


Chrome must clearly see this as the end of the road on Windows and MacOS. What incentive do the OS vendors have to champion full-access APIs having feature parity for OS-level JS engines?


I hope the browser will be invisible soon, because I hate having it between me and my application.


There's a lisp machine story hiding somewhere over here.


There are a lot of things that still need to be figured out. Installation performs the functions of conserving my bandwidth and giving me a way to manage the software through the OS.

I select software because it does something that I want, in a way that I want it done. I avoid web-based software when possible because it has a tendency to shift underneath me and break the way that I want the software to work....or the company decides that it needs to pivot, and I lose access to something I was depending on.

Software that installs comes closer to "just works" than software hosted online, IMO. Make it available in the repo/store/whatever, and that gets rid of a lot of install friction.

> You don't need something equivalent to maximum native performance to do what 99% of users want to do. The rare edge cases that need extreme performance will remain native.

Throughput? Probably not. But please fix the dang latency. It makes software just painful.


App stores will solve most of this, if they haven't already.


Yeah, except for that 30% of your revenue vanishing.


I haven't looked much into WebAssembly, but regardless of the native language you use, your app will be cross-platform without any extra effort, won't it? This solves one of the biggest pain points of native GUI development. While it's pretty amazing what Qt has managed to accomplish, I still hear about incompatibility problems between OSs.


It "solves" it by not solving it - WebAssembly doesn't do UIs, that component is still built in HTML. The wasm functions are just doing the expensive computation. I think there's been some work to build DOM apis in languages that compile to wasm, but for the most part the workflow is to create your UI in markup or standard javascript and then call out to wasm functions to do things, or else building the UI in WebGL (e.g. games)


> WebAssembly doesn't do UIs, that component is still built in HTML

Your UI could be HTML, but equally it could be a WebGL-based UI. Some UI libraries already use OpenGL as a backend so adding support for a WebGL backend shouldn't be too hard.


> but equally it could be a WebGL-based UI

Then you might as well ship a native binary. Beyond cross-compilation and window system bindings, there aren't many more portability concerns. But then you're rebuilding the browser, so the question is whether you can do it better than Google/Microsoft/Apple by including only the truly necessary bloat for your use-case. Bonus: you get to avoid browser politics.


> Then you might as well ship a native binary.

It isn't an either\or choice. You can do both. WebAssembly gives your application a "zero install" option and native gives you better performance.


Can you point me to them? Thinking about using HTML to build an app with good UX lets magic smoke leave my head.


Qt can work in the browser. Here's a Qt blog post talking about their Emscripten support: https://blog.qt.io/blog/2015/09/25/qt-for-native-client-and-...

You can also stream Qt applications via WebGL (runs server side, UI streamed to the browser): https://blog.qt.io/blog/2017/07/07/qt-webgl-streaming-merged...

Some asm.js Qt examples: http://vps2.etotheipiplusone.com:30176/redmine/projects/emsc...

Some WebAssembly Qt examples: https://github.com/msorvig/qt-webassembly-examples


Qt should work.


Qt solves almost the same problems as a browser, all it's missing is sandboxing and fetching resources over the network.


...which the browser conveniently provides, hence my suggestion to use it, targeting WebAssembly and WebGL.


Only QtQuick though; Qt Widgets aren't really optimized for GL.


Qt Widgets are kind of legacy, as all new development efforts are focused on QML.


Are they? The latest version has a bunch of new things: https://wiki.qt.io/New_Features_in_Qt_5.10 and the git repo seems relatively active: https://github.com/qt/qtbase/tree/dev/src/widgets


C++ Widgets are focused on desktop applications; for mobile and embedded devices, QML is the only way to go.


And that's how it solves an additional problem.

You don't have a GUI toolkit pre-defined if you start out with C++ (yes, it takes freedom, but gives ease of development, which seems to be today's currency).


Web browsers cross the uncanny valley home to 99+% of open-source, free-as-in-beer, and even commercially supported cross-platform GUI toolkits. This is due in part to the freedom to evolve new UI/UX concepts, but primarily because incentives align for polishing rough edges.

Web browsers grew up with an always-on Internet connection -- continuously pushing updates to code and data. This has expanded to continuous updates "all the way down" (the native code browser auto-updating on top of an auto-updating OS).

Rebuilding core features such as these, even if by selecting unencumbered open-source libraries, basically means winding up in nearly the same place (since browsers are competing to optimize their implementations). Finding a way to connect to the browser foundation underneath whatever layer in the stack begins feeling bloated is worth evaluating, but there are diminishing returns.


I was going to say this is just for portability for average consumer facing applications you need shipped as easily as possible, but actually I've seen people developing "native" extensions for Node.js with WebAssembly.


You still need an abstraction layer between compiled program and the target architecture.


Native apps are often written in web technologies via Electron. It all depends on what you need and what access you can get. Plus, someone has to maintain the browser and its packages, so there will always be need for systems developers.


An Electron app is not considered a native app


Not on Hacker News perhaps, but certainly by the rest of the world.


Certainly not.


To the vast majority of users, anything in a browser is a web "site" and anything bundled with the OS or requiring an installer (including Electron apps) are "apps".


To the vast majority of users a virus is anything that's adversely affecting their system, should we use that definition too? The vast majority of people call the common cold the flu (around here anyway), should doctors change the definition of influenza to the layman one? All industries have their own specialized terminology and they don't change it based on what naive end users think.


Sure, but the parent's point is that they compete with native apps. They download, install, update, configure, and use features like a native app.


But they often don't perform like native apps when it comes to size, speed, and look and feel.


Most users are not so discerning. Once Windows ships with a native JS engine (perhaps Chakra), will streaming Electron-based apps without traditional "installation" be possible?


Windows has supported "Electron" apps since Windows 95, when Active Desktop was introduced.

Thankfully it has been largely ignored until web devs decided to start using Electron.


Windows already ships with a JS engine - it's a supported language for UWPs. I don't think you can stream them, though.


So Clang itself was compiled to WebAssembly? That's slick, even if it does take forever to load.


What about packing up WebAssembly into a single-file, container-like runtime designed to run server-side web applications? This .war (Webassembly ARchive) file could include the application and all dependencies in a single installable that is JIT-compiled on load. Might be useful for the next generation of "serverless" cloud runtimes.


I see what you did there.

I’d be all for this, honestly, except that the existing containers (yes, even Jetty) are too heavyweight for this use case.

What you really want is a stripped down server runtime that loads and starts your application’s archive, fires it up on a (possibly random) local port and opens a stripped down Chromium instance that loads the app from that port.

Of course, that sounds a bit like Electron.


Node.js supports webassembly.


>.war (Webassembly ARchive)

This is sarcasm, right? If it is, it's damn good.


This will probably happen. Hopefully properly compatibly.


Clang is pretty big. There's about 55 MB of binaries on that page. When served from localhost, it takes about 10 seconds to load and compile a source file on my machine.


This gets us to two caching-related problems: what if you're offline, and how do you decide what gets cached? But browser vendors will solve this.


Isn't this solved with progressive web apps? On the other hand I'm not aware of any UI that allows you to explicitly choose if a PWA should be cached or not.


2mins or so. Not bad.


2 minutes is truly forever on the web. I found myself wondering if there was something wrong with the page.


~20 sec on my Late 2013 MBP from page load to "The [cake] is a lie," in Chrome 63.

Sadly it does not work in Safari or Safari Tech Preview, but it works in Firefox 57 (known problem, see https://github.com/tbfleming/cib)


Finally, wasm has short-circuited the will of OS manufacturers to make their platforms exclusive and incompatible with others.

Now let's see if wasm performs well enough on Android phones, and I guess we will finally arrive at software deployment heaven.


So the mobile part is very interesting to me, as I'm no expert on this: as cool as it is, isn't this whole WebAssembly thing a desktop-only thing? There is no way smartphones will be able to download, compile, and run this type of code at acceptable speed, will they?

Or am I wrong?


This is just fantastic. The "happy 2018!" news I was waiting for.

As I've said before, little by little we're witnessing the end of JavaScript dominance.


WebAssembly is still loaded through JavaScript; WebAssembly "apps" will be like libraries. You could compile other languages to JS before, but the performance gain is really awesome.


>You could compile other languages to js before

In the mid-term, there will be no more reason to do that. Just compile those languages straight to WebAssembly.


This is awesome. I was planning to write one myself after I couldn't find any such Clang runner for browsers. I guess I could just strip off a few overheads so I can run some pretty small programs to teach algorithms.


Idea - make a website that lets you compile and test any open source program. For example, I should be able to clone, compile, and run gedit (a stagnating text editor included with many Linux distros). This would let me quickly experiment with and edit a variety of programs and tools within the web browser without needing to get a development system up and running. I bet open source projects could get a ton of new hackers to continue development on their programs if everything could be worked on quickly within a web browser.


Main issue is memory usage. Running KDE+Kate - http://vps2.etotheipiplusone.com:30176/redmine/emscripten-qt... - uses 1793MB (htop VIRT), 416MB (Chrome task manager). (Oh, and the renderer process wasn't hosting any other tabs.)

Second main issue is speed. qsterix isn't as memory-heavy but runs all Tetris game code inside KDE JavascriptCore, which as part of the KDE/Emscripten build is transpiled to JS. In other words, http://vps2.etotheipiplusone.com:30176/redmine/emscripten-qt... is JavaScript inside JavaScript, and on my reasonable-but-not-last-week-new i3 box, I experience a small pause that I can't explain when I hit the spacebar to drop a block.

Half of me wants to warn everyone against opening these on their phones or tablets; the other half wants to know how badly everything crashes. :P

Note: I tried opening the Kate demo a week or so ago on a marginally older version of Chrome, and the download phase hung. It worked this time though.



Download of normal Kate hangs for me, too. But the noicon variant works well. However, I don't get a keyboard for text input (Chrome on Android/HTC10). So, well, sorry to disappoint you :P

While Kate is already quite fun, I'm looking forward to someone porting Linux to the browser, so we can have a clientless desktop environment delivered from a serverless cloud ;) I really want to run vim/g++ in a terminator on KDE on Linux on my phone connected to a keyboard+screen (once, for the laughs).


I can't wait to package this up into a beautiful Electron app.


the whole of qt and kde compiled to javascript and ran as an electron app... i can't even


Something like that needs to be built into Github or a similar site. Along with fork and download, 'compile to web' should be an option.

Keeping code alive would also make long-term archiving of obsolete software viable, more so than just putting the code online, anyway. This is important because so much of our culture has been expressed by and tied up in software.


It's hard to just apply this to an arbitrary project, which may use nontrivial linker features or native code.


True. I think this would be a project within the open source community. Core libs (libXYZ) would be pre-compiled to WASM, and only the lib/program being hacked on would be modified in the browser. The compilation and linking would include the core libs. Maybe I will work on such a project; I will think about it.


Isn't this what containers do?


Yes, but on the web.


Cool! Now I just need the JVM as a clang target, and then I'll never have to use anything other than C++ ever again!! :D Webassembly has a tad more promise than ActiveX.



Perhaps you could use Node.js to compile your C++ code into a native WASM module?


lol. Gross! Perhaps if Node.js... omg. Nevermind. You can call pthreads from Node.js. :)


This is really slick. Clang took about 20 secs to load on my quad-core MBPro. How does this change web development? Do we need an ECMAScript equivalent for C++?


>Ecmascript equivalent for C++

W-what?


Sorry for sounding stupid. Does this imply you can now do web programming with C++ instead of JavaScript?


Yes. You can build WebAssembly bytecode from C++ and it will run in the browser like JavaScript.


Thank you. In that case, is it possible to access DOM elements from a C++ program the way we do in JavaScript?


Not yet, but AFAIK it eventually will be.
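Until then, the usual pattern is the reverse direction: import JavaScript functions into the module and have the wasm code call back out through them (the JS side can then touch the DOM). A hand-assembled sketch below; the bytes are written by hand only for illustration, a toolchain emits them in practice:

```javascript
// Module that imports env.log(i32) and exports run(), which calls log(42).
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,             // magic + version
  0x01, 0x08, 0x02, 0x60, 0x01, 0x7f, 0x00, 0x60, 0x00, 0x00, // types: (i32)->(), ()->()
  0x02, 0x0b, 0x01, 0x03, 0x65, 0x6e, 0x76,                   // import "env"
  0x03, 0x6c, 0x6f, 0x67, 0x00, 0x00,                         //   "log" as func type 0
  0x03, 0x02, 0x01, 0x01,                                     // one local function, type 1
  0x07, 0x07, 0x01, 0x03, 0x72, 0x75, 0x6e, 0x00, 0x01,       // export it as "run"
  0x0a, 0x08, 0x01, 0x06, 0x00,                                // code section header
  0x41, 0x2a, 0x10, 0x00, 0x0b                                 // i32.const 42; call 0; end
]);

// The imported function is plain JS; in a browser it could update the DOM.
const received = [];
const imports = { env: { log: (x) => received.push(x) } };

const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes), imports);
instance.exports.run();

console.log(received); // [ 42 ]
```

Emscripten's generated glue code uses this same import mechanism under the hood; proposals for direct DOM access from wasm would remove the need for the JS trampoline.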


So does the "browser is the OS" concept improve security? Is it analogous to a hypervisor running on a server somewhere: when you log into the server, it spins up a VM OS just for you, and any hacking done on that OS is isolated and disappears when that instance is closed. In the same way, when the browser is the OS, none of the files, apps, or other stuff on your machine is available to it?


So we could finally port windows93(.net) to C++?


There is actually an immediate real-world use for this (not just as a toy): online judges (like LeetCode). Of course, the final evaluation has to be done on the server to avoid cheating on the test set, but users can run their own tests in the browser without the round trip to and from the server.


This is neat and shows great potential. I hope that in the future Adobe products will be integrated with WebAssembly, or even that old products will be converted to WebAssembly and run in the browser. We shouldn't be that far off.


Are you thinking like Photoshop, or a sandbox for Flash?


  enlarged memory arrays from 536870912 to 1073741824, took 28094 ms


That's a bit excessive. I got the same with "57 ms", and quite a lot of my 8GB RAM (and several GB of swap) are used.


17ms and 533MiB. Seems like quite a lot of variability. This was Chrome on Mac OS.


Now we just need to compile a browser to webassembly so we can run a browser in a browser :) then the browser will officially be a complete operating system.

On a serious note though, this is an awesome example!!


This is very cool.

It seems like there's some sort of memory leak, though. If you press the Run button over and over, you'll get a little message every 2*N runs about expanding memory arrays by double.
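That doubling is visible at the WebAssembly API level too. A sketch with plain `WebAssembly.Memory` (an illustration, not this demo's actual allocator): linear memory is sized in 64 KiB pages, and growing it detaches the old `ArrayBuffer` and allocates a bigger one, so repeated doubling churns through large allocations and can look like a leak.

```javascript
// Wasm linear memory grows in 64 KiB pages.
const PAGE = 64 * 1024;
const memory = new WebAssembly.Memory({ initial: 1, maximum: 16 });

console.log(memory.buffer.byteLength / PAGE); // 1 page

const before = memory.buffer;
memory.grow(1); // request one more page

console.log(memory.buffer.byteLength / PAGE); // 2 pages
console.log(before.byteLength);               // 0: the old buffer is detached
```

Emscripten-compiled code layers a malloc-style allocator on top of this, which is where the "enlarged memory arrays from X to 2X" messages come from.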


So, how long until we can build Firefox and run it in Chrome?


Sorry for the negative comment:

*) it crashes Firefox
*) very slow
*) "1"s c_str? srsly??


Just tried it on Firefox, working fine. Changed the code as well, and still compiles.


And it only uses 400MB of RAM.

Neat though.


So the web browser is pretty much a mini operating system. Firefox and Chromium seriously feel like they have the longest compile times of any package in Gentoo, except maybe LibreOffice. They have tons of embedded packages that they don't pull from the system/native libraries (JPEG and PNG decoders and such).

So with a lot of these neat things where we compile stuff or run a Linux kernel in the browser, we've pretty much come full circle? It's like running Cygwin in Wine, or an NES emulator inside a Windows 95 VM.


I have been thinking about why we have ended up here, and why not just native apps.

The obvious answer is that it makes applications portable, which is great. The other key component, I think, is delivery. You don't ever install anything; it just exists when you ask for it. That is something native applications have never done, and not something even the JVM has done, even though it addresses portability too.

It is also nice that it is seamless with the browsing experience, but I have to wonder if it hasn't just turned out that the typical browser interface is a better interface for computing than most popular OS have been? As the OS and browsers start to see parity, are there lessons from browsers that we can integrate into the OS?


The answer is that browsers provide something that users desperately need and no operating system has ever provided: a sandbox strong enough to run completely untrusted code. The success of browsers is an indictment of the entire field of operating systems research. They have either failed to recognize the need or simply failed to deliver that kind of security.

For one example take WebGL. For decades OpenGL had been a native API prioritizing performance first and security last. It wasn't until browsers decided to expose OpenGL that the necessary work was done to make a graphics API safe for untrusted code. And who did that work? The browsers had to do most of it themselves.


> The success of browsers is an indictment of the entire field of operating systems research.

Rob Pike wrote in the year 2000 that systems software research was irrelevant (http://herpolhode.com/rob/utah2000.pdf), and it seems to me that not much has changed since then. How many ideas from the last few decades of operating systems research have actually been put into practice?

The real problem is more fundamental: people want their software to keep working. As Rob Pike put it in the talk linked above, "to be a viable computer system, one must honor a huge list of large, and often changing, standards: TCP/IP, HTTP, HTML, XML, CORBA, Unicode, POSIX, NFS, SMB, MIME, POP, IMAP, X, ... With so much externally imposed structure, there’s little slop left for novelty."

For example, I'm sure some software out there is using the OpenGL API in a fundamentally insecure way. Changing the API to be safe would break this software. And maybe that would be a good tradeoff, if reworking OpenGL were the only thing you needed to do to safely run untrusted code. But almost every part of the system would have to change. You'd be left with a system which breaks or degrades pretty much everything you try to run on it.


Native apps had a head start: they don't load every asset from a potentially untrusted source.

I would say that the browser most certainly has the biggest attack surface of any software in regular use. Your average browser is insanely complex. In fact, it is the only piece of software on my computer that scares me.

Multiple JITs, font rendering, parsing, layouts, compression, image handling, sound and of course about a billion edge cases because somewhere along the road we decided that faulty code is OK. Everything with tonnes of state that interact in ways that can't ever be fully tested or verified (because of an almost infinite variety). Let's also not forget that a large part of that is done at the very bleeding edge of CS research. I wouldn't trust anyone to do that in a safe way.

To me, a kernel seems simple in comparison.


> Native apps had a head start: they don't load every asset from a potentially untrusted source

The whole native app is "untrusted" at the point of install. Even an app store offers a fairly thin guarantee about what apps are actually doing. It's far easier and less risky to open a web page and start doing something than install a native app.


Yes and no. Trust on first use is a thing. A web page is trust every time.

Sandboxing of native apps is getting easier by the minute, at least for Linux. Flatpak and Snap aren't there yet, though...


I only think this applies to certain ecosystems (ones where the motivation or incentive to provide the software doesn't align with ensuring user's safety or security).

The native apps I run on my Linux laptop, for example, I trust quite a bit. There may be a way for someone to sneak in some obfuscated code that does some harm, but I think the risks are much lower than many of the alternatives.


> biggest attack surface

Sure, but in many modern browsers there's also an OS sandbox around it, so there are two layers of protection.


> a sandbox strong enough to run completely untrusted code

No browser has provided that either. In fact, browsers are by definition a security anti-feature because they are always connected to the network and the customer/user data lives on the network.

Even if they manage to solve the problem of computers getting hacked when visiting websites they are very unlikely to solve the problems of tracking and private data vacuuming.


I've been using the web for over twenty years and it has never harmed my computer or invaded my system in any way. I do think I've seen adware and scumware that may have originated on the web on friends' computers, but I'm pretty sure they did something like allow a toolbar to be installed. Can you give an example of an exploit that did not require user permission, did not use a plugin, and that allowed arbitrary code execution from the internet?


> I've been using the web for over twenty years and it has never harmed my computer or invaded my system in any way

How do you know? Basically every browser has an RCE vulnerability a month, and has for 20 years: from ancient JPEG vulnerabilities to modern video codec and DOM manipulation vulnerabilities.


Anecdotally, I've never experienced remote code execution through a browser either. However, the most widely used browser for many years did have plenty of remote execution vulnerabilities that did not rely on plugins or toolbars [0].

[0] https://www.cvedetails.com/vulnerability-list.php?vendor_id=...


Exploiting remote execution vulnerabilities is doable, but why do that when one has the easy route of exploiting gullible users instead?


Would the recent Spectre vulnerability be an acceptable example for you?

There are a gazillion others, but this one is nice and sweet and exposes the whole web app strategy as completely bankrupt from a security perspective.


Yes. The web itself is harmless; it's the stuff you download and run that harms you.


Guns aren't dangerous! It's the bullets that will get you!


This is not an appropriate analogy. Guns and bullets are designed to kill, but browsers and the download capability are not designed to infect your computer. Rather, they are designed with an opposite goal.


An "infection" in this case would be the execution of some other person's code on my computer. Browsers have been designed specifically to do this since Javascript was invented.

The only difference between malware and "regular-ware" is whether the code meets certain user expectations about what it will and will not do.


Yes, exactly. What's your point though?


Browsers do a whole lot better than the status quo for apps in the '90s and early 2000s, which was "offer a Windows .EXE to download, and monetize with spyware".


Yes, they are much better than your example, which is one of the worst things that could happen. :)

But paid software in the 90s and 2000s was fine, no network connectivity, didn't even check for updates. Heck, a lot of freeware was fine.

Then the market changed and spyware as a service became very profitable. The genius of this Spyware 2.0 was that it offered some benefits while not abusing the collected data in any obvious ways, like stealing your money. Browsers are helpless against this, in fact they are enabling the whole business model through their support of web apps and increasingly complex APIs.


Web apps are already spyware by default, as everything gets done on a computer I don't have access to.

Just plain pretty TTYs.


There is a world of difference between analytics and tracking and, say, CoolWebSearch. You don't see Web apps hijacking your home page, filling your bookmarks with casino ads, installing toolbars, or replacing competitors' banner ads.


> The answer is that browsers provide something that users desperately need and no operating system has ever provided: a sandbox strong enough to run completely untrusted code. The success of browsers is an indictment of the entire field of operating systems research. They have either failed to recognize the need or simply failed to deliver that kind of security.

The JVM more-or-less managed it. Modern hypervisors do it - you could use something like Qubes and run completely untrusted code in its own isolated VM. It's not that academics weren't working on this stuff, adoption is where it fell down.


The JVM took too long to achieve that ideal in a way that actually worked for end-user applications. Early java-based client applications had a horrible user experience. Version conflicts were very annoying, especially on windows where you didn't have a command-line culture comfortable with simply switching between versions by modifying PATH and JAVA_HOME. And even when you didn't have version conflicts, Java applications had slow, idiosyncratic GUIs that often did not support basic and common features like ability to copy/paste or switch to arrow-key navigation through menus.

And on top of all that, it would still be more memory intensive than an equivalent native win32 application. Developers usually only shipped java-based UIs to Enterprise Software users who had no choice but to use what their bosses told them to use.


Browsers have a worse history with compatibility, they are more memory-hungry than Java, performance is worse, and their UI isn't even trying to mimic the system UI (Java has had a system L&F for at least 10 years, likely more).


It is a bit of a chicken and egg problem, though. It's not enough for researchers to write a paper about APIs that should exist. The APIs have to get implemented across major platforms, which requires major applications using those APIs to motivate them. I don't see how it would ever have happened with a giant slow incremental process, and the web has been that process for us.


* without


I agree a lot with your comment. But it should also be noted that users have very little choice on what code they run when surfing the web. So in one way, browsers cause the problem they then have to solve.


Aren't browsers basically the largest attack surface on most platforms now? Have they really succeeded in doing this?


Saying browsers are "the largest attack surface" is an indication of ubiquity, not an indictment of the design or implementation of browsers (which are still worth considering separately). And we only make such remarks about browsers (or other apps that execute untrusted code like spreadsheets or PDF readers) because the state of OS security is such that any program running directly can pretty much wreak havoc even without explicit superuser privileges. Even on locked-down mobile OSes, the amount of havoc your Facebook app gets up to is incredible compared to what happens when you merely visit facebook.com on your mobile browser.


It seems like most users don’t really understand that they’re really running a lot of code for every page they visit now though. IMO it’s as prevalent as it is because browsers just run code by default. If iOS and Android apps were automatically launched without explicit installation by visiting URLs, they’d probably have just as much security out of necessity.


" because the state of OS security"

Those not designed for security were like that. Those designed for security had few of those problems. They just weren't popular with the big ecosystems and such since the demand side didn't care about security much. Very little security bolted onto something that was opposite at core. Common solutions were memory-safety for apps with validation on API calls, limiting of API's accessible, mandatory access control, and/or VM's isolating whole systems from apps needing protection.

OS's and browsers are not only becoming similar in functionality: they're similar in why people adopted them and why they're insecure.

"Saying browsers are "the largest attack surface" is an indication of ubiquity, not an indictment of the design or implementation of browsers"

It's actually saying both since browsers were mostly not designed for strong security or apply security engineering techniques of the time. That would be POLA, privilege separation, memory-safe languages, high-quality code in any components integrated, and so on. The first I saw attempt it was Chrome's Native Client imitating some benefits of OP Web Browser [that was designed for security] but weakening them for performance. Latter was Chrome's highest priority IIRC. There's Quantum moving memory-safe code into Firefox. However, browsers are mostly insecure architecture and code that just gets patched as problems are found. And they're ubiquitous. Music to malware authors' ears. :)

I'm including examples below of security-focused browser architectures applying various methods of security at the design and/or implementation stage so you have a mental point of comparison to current ones in terms of techniques employed. They were released as prototypes with nobody putting any effort in past that. So, the high-assurance sector just isolated regular browsers in protection domains (e.g. VMs) on separation kernels, or used MAC if browsers had to be there. Otherwise, native apps in memory-safe languages with a regular old client-server architecture were much easier to make reliable and secure. Especially if using middleware designed to help with that. That's still true.

DarpaBrowser http://www.combex.com/papers/darpa-review/security-review.pd...

OP and OP2 Browsers https://pdfs.semanticscholar.org/832a/911f97b500cd2df4680186...

Microsoft Gazelle https://www.microsoft.com/en-us/research/publication/the-mul...

Illinois Browser Operating System https://www.usenix.org/legacy/event/osdi10/tech/full_papers/...

Quark Browser http://goto.ucsd.edu/quark/


You can add even more pros for wasm web apps.

- Sandboxed: web apps can't wander around in your filesystem, and you can even prevent them from knowing anything about you (private mode). That's good for privacy and security.

- You bypass the Apple/Google stores, their toll, their capricious criteria, and how they promote apps.

And I think wasm apps (not websites) will have great success once wasm has become a full VM, not just for these reasons but also because it will make it possible to develop cross-platform in a language that isn't JavaScript. Some people love JavaScript; I personally hate it. C++ devs will be able to develop in C++, I expect C# and Java devs to be able to write their apps in C# or Java, and I am sure even more languages will join the party. That's a great thing.


Private mode is about hiding a trail from yourself rather than the websites you visit. They still get all your request logs.


When you launch private mode (save for browser fingerprinting and other super cookies), you are a new user to the website. When you close it, anything the website attempted to store on your machine is gone. Of course it can track you during that session but not before or after.


Unfortunately techniques like browser fingerprinting[0] still work in private browsing and can be used to link your private browsing to your normal browsing.

[0]: https://en.wikipedia.org/wiki/Device_fingerprint


Can, but usually aren’t. Advertisers aren’t interested in advertising to you unless it’ll “work” (in terms of sales, clicks, or whatever the KPI is). If I think you want to not see my ad, I don’t want to buy your eyeballs.

My own experiments with GL and the timing-based methods is that they just don’t work well (compared to say, cookies) when delivered via an advertisement. Plugins and fonts work very poorly as well, lately.

I don’t think anyone is using these methods to target advertising, and state-level actors don’t have to (they just bug your ISP).

Who are you trying to protect against?


Don't forget sandboxing. Browsers routinely execute unsafe scripts (hopefully) without compromising the user's account or even the browsing session. OSes are still far behind.

I always say that this situation is a bad solution to a bad problem. Had Apple, Microsoft and the FOSS world agreed on some standards, native code could be portable and the norm.


This seems to follow a natural evolution, everything coming and going in waves, each wave a new and better version of the one before. Before personal computers, terminals were the thing: you connected to a mainframe and executed programs there. Then standalone, mostly offline computers were the state of the art for a while. Then the Internet boomed and computing shifted to the server/client model, and now we are going back toward loading code from a server, but with more intelligence in the client and more offline execution.

It's interesting to notice these things repeating themselves. I bet at some point maybe the OS and the computer merge more, and we will again have systems like we did when the Internet began to rise.


It's definitely far behind where web apps are for instant no-install access, but Android has their "Instant Apps" feature providing similar capability https://techcrunch.com/2017/10/19/google-play-adds-android-i...

Disclaimer: I don't have an android phone anymore and I have no idea how available/useful these actually are


Extremely far behind schedule, but for the apps it does work for, it's not even that bad. The Android app architecture seems to require some heavy restructuring to support this, though, so I don't think it will be attractive to developers. The portability problem also holds it back from the insane adoption the web has gotten. Basically, it's too little, way too late.


Since browsers originally only displayed documents, some of their assumptions are nicer than the ones that came out of desktop metaphors. For example, most UI text in a browser can be copy/pasted, and that's almost never true in native UIs.


> You don't ever install anything, it just exists when you ask for it. That is something native applications have never done

Actually, this used to be common, and still is for things like game consoles. You used to stick the floppy disk in, and run the program directly from there. No concept of installing anything. Indeed your computer may not have even had writable storage to "install" anything onto. At worst, if you did have a hard drive, you were free to copy the (single) executable onto your drive so you didn't need to rummage through your stack of floppies to find the program every time you wanted to run it. That was the extent of installing something.

The situation where your program consists of multiple files, each of which has to be copied into some special location on your hard drive by an "installer", is a relatively new (since the 90s?) concept.

It's actually refreshing nowadays when I stumble upon one of those rare applications where everything comes in a single executable file and it can be run from anywhere. Dying breed.


The obvious answer is that web-based apps are effectively access restricted so the developer can charge any price they want including free, change prices at will, and enforce it effectively over the long term.

Native apps struggle with this.

Things that are easy to monetize tend to attract more effort than things that are possibly impossible to monetize, or have very limited monetization options.


It's optimizing for the path of least resistance. No context switching, no decisions necessary, just click and go.


I think delivery is a bigger deal than portability. There are many ways to make applications portable in 2017. The big deal is that browser SPAs don't need to be installed, they just run when a user clicks a link or loads a page while browsing the web (something they already do anyway).


Alan Kay has an excellent and relevant talk as to why we shouldn't have ended up here (from 1997), which I think will answer your questions.

https://www.youtube.com/watch?v=oKg1hTOQXoY


It also makes it hard to pirate your software. Cracking native software is relatively trivial, which is why legitimate licenses cost insane amounts. Contrast Soundtrap charging $15 a month for their web-based DAW with Ableton having to charge $750 to make up for piracy costs.


On the flip side of that coin, as a user, it also makes it hard to control your own software - you are not in control of the version you are using, not in control of the features the program has, and above all not in control of the availability of the program, since whoever owns the server you are accessing it from can revoke that access at any point for whatever arbitrary reasons (of course always mentioned in the Terms and Conditions that nobody reads).

I'd be wary of promoting anything that takes control away from the user. The web is fine for sharing knowledge and communication (and pictures of cats), but let's not put everything in it, or we'll certainly regret giving away the power to control our software.


That assumes it would be impossible to download a WASM blob and run it natively, or that no one will ever provide source code for their binaries which you could potentially edit. It's not a zero-sum game.


I believe we already have enough evidence, both from existing web applications and from the other RIA technologies available over the years (Applets, Flash, Silverlight, ActiveX, etc.), that this is a very safe assumption to make. There are exceptions, of course (i myself made the source code of my Flash 3D engine, written in Haxe, available years ago), but they are extremely rare.

Besides, the experience the original GP post was talking about and the message i replied to didn't really leave any room for not making such an assumption, since they were all about comparing web apps to native apps: "I have been thinking about why we have ended up here, and why not just native apps."

And finally, i do not see WASM giving any more control to the user than existing web tech - after all, it relies on the rest of the webpage to work, much like JavaScript (actually, it requires JavaScript to act as a mediator, at least for the time being). And considering how you cannot download Google Docs as a native offline application to use on your desktop, despite being able to save locally the JavaScript it downloads in your browser, i fully expect the same situation with WASM too.


Drawing inferences based on other technologies isn't really evidence.

WebAssembly is open and it's intended to be a general purpose compilation target, the technologies you listed are proprietary or only intended for a single language. Also, WebAssembly's own docs state non-web embeddings as an explicit design goal[0].

As there is no commercial need for a particular corporation to prevent it, and there is no design limitation preventing it, there's no reason to assume some sort of native WASM runtime won't be available in the future. That would make it no different than Python or Ruby or any of the multitude of C++ runtimes I have to download with games on Steam.

My point being that there's no reason to assume WASM has to run as remote code connected to a server over which the user has no control or meaningful access, as opposed to a binary you can run on your desktop. There's nothing in the WASM spec that forces such a distinction to exist, that's an architectural decision made at the application level, and one that could be made in any language.

[0] http://webassembly.org/docs/non-web/


I think we are talking about different things; you seem to be talking about the possibility of using WASM as a generic virtual machine in the same vein as the JVM. In that case, if there is some sort of runtime that provides the ability for a program to work both in web mode and offline mode, and the developer decides to make their application available through it, then yes, it won't be an issue. After all, people already do that with the JVM, Flash and even ActiveX (which was originally meant for offline use).

But i am not talking about such a use; i am talking about using WASM to create applications that are designed to be part of a web site and meant to be accessible only online - pretty much what the majority of Flash games did previously. The original message i replied to was about web-based applications meant to be delivered online, so my messages assumed we are talking about web-based applications pretty much in the same vein as Google Docs, or games like the thousands of Flash games you'd see out there some years ago. You wrote "That assumes it would be impossible to download a WASM blob and run it natively" - the possibility of doing that would be exactly the same as downloading a JS file and running it natively (in this case i take "natively" to mean running an offline local copy in a browser without any sort of external requirement, like downloading files off the web). I do not see a reason to expect WASM to be used in any different way.


1. Ableton is a great DAW with a long history used by many well-known producers. They can and deserve to charge more than 15 bucks. It's laughable to think that anything other than a todo app would cost so little.

2. It's not $750, there are multiple editions available, the more expensive ones include more plugins, instruments, loops, etc.

3. Cloud-based startups are nice until they reach the end of their wonderful journey, sell out and you're out of a tool.

Soundtrap is charging $15 because that's what it's worth. Although if the cloud is mandatory it's worth zero in my opinion.


This, a hundred times. An insane amount of work has gone into making something like Ableton Live. The general idea that applications should cost something in the range of 2-30 USD is ridiculous when you compare it to any real-world physical example.

For example, any instrument costs much more than that, even though comparable time could have gone into developing the software as into building the instrument. Or heck, any handbag or fashion item can cost a ton of money and people are ready to buy them, but when software actually costs something close to the real value it produces, people get pissed these days.

I blame the iOS/Android markets for creating a biased and untruthful image of the cost of developing software. You can't expect to make a living from developing software that costs 1.99/2.99/9.99 unless you sell a ton of copies, and most never do, but this market has effectively conditioned normal consumers to get things either free or for the price of three cups of coffee.


>I have been thinking about why we have ended up here, and why not just native apps.

In my opinion it's because operating-system design wasn't sexy after browser companies came and ate the OS guys' lunches with all that big bubble fuss. So the OS guys went all apathetic and forgot that a majority of their OS is a freakin' browser, and .. here we are. Inception.

It's great we can compile C++ to 'native' assemblies now. We've always been able to do this, btw. The only difference is the delivery mechanism ..


> not even something like JVM has done

Not counting applets and Web Start?


Well, perhaps to be more accurate,

> not even something like JVM has done successfully.


It was such a bad idea that Google now has Instant Apps on Android and ChromeOS.


The obvious answer is that it makes applications portable, which is great.

That's an understatement. Anybody who tried to write a complex multiplatform desktop application around 2000 learned that it was way harder than it sounds today. Browsers were among a very small group of success cases.

Edit: also don't forget that there was a war over which tools and languages would become dominant. Unlike the alternatives, most web tools were free.


> are there lessons from browsers that we can integrate into the OS?

Windows 10 is working on adding tab navigation to all windows: https://arstechnica.com/gadgets/2017/11/tabs-come-to-every-w...


That's the conclusion I also came up with. It's pretty clear that the browser is the new platform for apps.

Though I wonder why Firefox OS failed this hard. I never looked into it much, but wasn't it a platform centered around having web apps as native apps? I think some day every OS will be similar, so was it ahead of its time?


And things like dropbox allow the OS to go in the other direction, towards being distributed.


   The other key component I think is delivery. You don't ever install anything, it just exists when you ask for it.
That is precisely why I made the jump from Delphi to ASP.NET back in 2003, and now the jump to Angular2+.


Why Angular instead of React...or something else like Vue?


It's the first SPA framework I've learned and fell in love with it. I haven't gotten around to researching React or vue yet. I wonder if I'm missing out on anything by not using them.


> You don't ever install anything, it just exists when you ask for it

Relevant XKCD:

"Installing" https://xkcd.com/1367/


> I have been thinking about why we have ended up here, and why not just native apps.

That's what Java was supposed to be.


The JVM certainly tried this with applets


Most users spend >80% of their time in the browser.

It's probably that simple.

If you have a thing and you need users, are you really going to deliver it as a Qt app? Unless you're bitcoin, it's hard to think of a case. I'd say "or a game," but many are webapps. And at this point some of the hollywood-grade games might figure out a way to be coming to browsers soon. Unreal did it.


It is that simple. And a bummer because I think HTML is a pain for things that aren't documents. And JS UI frameworks feel like hacks too.

Unity already supports the web, but I don't think that'll matter for AAA games. People who buy computers specifically to play them will care about the slight performance hit, and won't want the games they bought and spend a lot of time with to live as bookmarks to web pages. Non-performance-intensive tools, even large ones, have already moved to the web. This will likely continue, because no other platform delivers its level of portability and ease of delivery.

It's a bummer because the web just kind of sucks. It was made for documents, damn it. Something like a Java browser could've succeeded, as jars are already very portable. That could've worked pretty well for small and large tools alike. But since the HTML stack is already widely adopted, I guess we'll have to make the pain go away somehow. At least WebAssembly solves the problem of computationally intensive tasks. The UI component is still quite a mess, but that's getting better too. I just find it annoying that they have to call everything Web-(what it traditionally was). It's basically the new reality, being used on non-web platforms as well as on the web.


Unity also has the slight delivery hitch of needing the browser plugin to run games in the browser. I tinker with Unity and I still didn't install the browser plugin.

That is the same hitch the JVM had. For a non-technical user, that extra step feels risky and unintuitive. "Why doesn't it just work?" feels a bit unfair when you know what's going on, but to a normal user I think it's a fair question. We are the professionals after all, why can't we make it "just work"?


Yes. Please, someone, make gamedev in the browser a reality. There are a thousand hitches: Why can't I install a 9GB game on the browser? We need all that space for textures.

"But what if –"

Oh come on, we can figure it out. There must be some way.

Also UDP. Yeah, we have WebRTC, but look up Beej's guide to network programming. That's the threshold of intelligence required. Currently to get WebRTC working with UDP for gamedev purposes, it's just... Not easy. Maybe not even possible. I don't know.

We need UDP and we need an install manager. And a few other things. Start thinking about how to extend the browser so we can deliver world-class experiences to gamers.
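For what it's worth, the closest the browser currently gets to UDP today is an unreliable, unordered WebRTC data channel. A minimal sketch (signalling and ICE are omitted entirely, and the "game" label is made up):

```javascript
// UDP-like delivery over WebRTC: these two options are the whole trick.
// Everything else about WebRTC (SDP offers, ICE candidates, a signalling
// server) still has to be wired up before any data flows.
const unreliableInit = {
  ordered: false,    // don't stall later messages behind a lost one
  maxRetransmits: 0, // a lost message is simply dropped, as with UDP
};

function openGameChannel(pc) {
  // pc is an RTCPeerConnection; "game" is an arbitrary label
  return pc.createDataChannel("game", unreliableInit);
}

// Only runs in environments that actually have WebRTC (i.e. browsers)
if (typeof RTCPeerConnection !== "undefined") {
  const ch = openGameChannel(new RTCPeerConnection());
  ch.binaryType = "arraybuffer";
  ch.onmessage = (e) => console.log("got", e.data.byteLength, "bytes");
}
```

The config is the easy part; the hard part is all the negotiation machinery around it, which is exactly the complaint above.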


I'd rather not see (offline) games in the browser, or any other online-based method, personally. I want to have control over the games' data files so i can copy them to my external hard disk, be able to run them in 10 years without hoping some server is still online and their owner gives a damn about stuff they made 10 years ago (most stop giving said damn within a couple of years, let alone 10), etc.

Let's not forget that the cloud is really someone else's computer and relying on said cloud means relying on someone else for your stuff.

And let's learn a little from all the thousands of Flash games that are in the process of being lost forever now that their hosting sites are shutting down.


I don’t understand why you object to browser games but use Flash as an example of what could happen. First of all, tons of Flash games are archived. Second, how is the web worse than a compiled binary that you don’t have source code access to? Games that are made for the web are easier to preserve and run in the future than native games, by a long shot. Would you rather run a browser that can play all of the games on the web in a backwards-compatible way, or rely on emulators for previous operating systems or game consoles?


> First of all, tons of flash games are archived.

Tons of them are archived, but these are a tiny minority; in its heyday there were hundreds of Flash games released per day. A lot of that stuff is or will be lost forever.

> Second, how is the web worse than a compiled binary that you don’t have source code access?

The web is worse because for the application to run it needs to be loaded off that web site, and the user has no control over that. I cannot, for example, preserve Google Docs or any other web application i like - i just have to hope that it'll remain online, will keep working on my computer without issues like slowing down, and won't change its UI to something i dislike (people often use older versions just to be able to run them, or because they like the older UIs better).

> Games that are made for the web are easier to preserve and run in the future than native games by a long shot.

How can you preserve a game whose files you do not have full access to? Only the developer has that power, and developers for the most part have shown over and over that they do not care about the games and applications they make after a couple of years or so.

I have games from companies that closed their doors more than a decade ago; if those games were web games, they'd be gone the moment the company shuts down. Or more likely, the moment the company decides they do not want to pay for the bandwidth of serving the game anymore, like what already happened to several MMOs (many of which are perfectly playable solo).

> Would you rather run a browser that can play all of the games on the web in a backwards compatible way or rely on emulators for previous operating systems or game consoles?

I'd rather rely on native OS support (above all), compatibility layers like dgVoodoo2 (for 3DFX and older DirectX compatibility in Windows) and Wine (before dgVoodoo2 it was the best way to play old 3D games for Windows) and of course emulators. Remember the games i mentioned above about companies that do not exist anymore? Several of them are for DOS and running perfectly fine in DOSBox.

As i wrote in the message you replied to, it is all about who controls the files, and personally i want full control of them because i do not trust the developers with that control - they have proven time and time again to not be trustworthy (not necessarily through any fault of their own, of course, but that is of little importance when you lose a game or application you like).


If you have an HTML5 game that you own and it doesn’t require a server (it runs entirely on the client side) then there is no technical reason it can’t be archived.

You can’t run google docs offline because what would that even mean? One of the main features of google docs is the live collaborative editing. It’s a multiplayer game that requires a host server.

You can architect your game to be friendly to being archived or you can make it completely impossible by introducing a dependency on the network. This is true for native games and web games.


> If you have an HTML5 game that you own and it doesn’t require a server (it runs entirely on the client side) then there is no technical reason it can’t be archived.

If a game is fully available on the client side - it doesn't just need to run on the client side, but have everything available on the client side, data files and all - then yes you can archive it. But this is a very limited case and so far only the most simplistic games can be distributed like that. And TBH considering the DRM craze that games have, if anything i'd expect companies to try and make such archival as difficult as possible.

> You can’t run google docs offline because what would that even mean?

Having a word processor, spreadsheet, etc available offline.

> One of the main features of google docs is the live collaborative editing. It’s a multiplayer game that requires a host server.

It is a feature but i'd say that the main feature is being able to edit documents. In terms of games, it is a game with multiplayer features that still requires a host server for its singleplayer part (which is generally something that is frowned upon).

> You can architect your game to be friendly to being archived or you can make it completely impossible by introducing a dependency on the network.

You are talking from the point of view of the developer, i am talking from the point of view of the user. It should be obvious from when i wrote "How can you preserve a game that you have not full access to its files? Only the developer has that power and developers for the most part have shown over and over that they do not care about the games and applications they make after a couple of years or so". It is the user that suffers from such choices, not the developer.

Please try to read my posts with the eyes of a user, with a user's concerns, and try to avoid any developer bias.

> This is true for native games and web games.

Technically yes, but things aren't black and white - there are "natural" tendencies in each approach with the web approach leaning heavily towards network reliant applications and the native approach avoiding it.


> is a feature but i'd say that the main feature is being able to edit documents. In terms of games, it is a game with multiplayer features that still requires a host server for its singleplayer part (which is generally something that is frowned upon).

Uh, before Google Drive, I'm not sure people were looking for a substitute for Word or other word processors. The "Killer Feature" of Drive is that multiple users can collaborate on the same document, hosted online, in real time. If anything, Google Drive has fewer features than Word overall. It's having "one true source" instead of emailing around multiple drafts that is the killer feature for most people. Or having access to that document across all their devices. You could build an offline text editor in the browser, but why would you? (Then again, I use VSCode every day, which runs on Electron and works totally fine offline.)

Edit: I just realized, you CAN run google docs offline. https://support.google.com/docs/answer/6388102?co=GENIE.Plat...

> Technically yes, but things aren't black and white - there are "natural" tendencies in each approach with the web approach leaning heavily towards network reliant applications and the native approach avoiding it.

The native approach has its own "natural tendencies" that are anti-user, but I'll agree that yes, there are a lot of games that are unarchivable. I don't disagree with that.

> I'd rather not see (offline) games in the browser, or any other online-based method, personally.

This is the statement that you made that motivated me to respond. This is why I tended to ignore what some businesses might do and focus on the developer perspective of "what is possible with these technologies." Because we're already talking about single player offline games.

I guess I'm focusing on the developer side because you seem to be focusing on all of the anti-user things that game devs might do, to which I say "aren't they already doing those on native apps?" I just don't see how the web can make the situation worse. Everyone isn't going to want to play every game in the browser. However, there is great potential for multiplayer games, there is great potential for single player games that compete with mobile (which is by far the most anti-user platform of all!). Yes, people will use for other things, but they're already doing that without the web's help anyway.

Another Edit: > If a game is fully available on the client side - it doesn't just need to run on the client side, but have everything available on the client side, data files and all - then yes you can archive it. But this is a very limited case and so far only the most simplistic games can be distributed like that.

This is increasingly possible with WASM and WebGL.


I think we're focusing too much on the specifics of Google Docs, replace Google Docs with any other web application that has a desktop equivalent.

> Edit: I just realized, you CAN run google docs offline. https://support.google.com/docs/answer/6388102?co=GENIE.Plat....

Yes, but i'm not talking about being able to run something offline; i'm talking about being able to take Google Docs, put it on a CD, DVD, external hard disk or whatever, and have it working in 10 years independently of Google's servers - pretty much what you can do today with, say, Microsoft Office 95 (or any other desktop application available in downloadable or physical format).

> I guess I'm focusing on the developer side because you seem to be focusing on all of the anti-user things that game devs might do, to which I say "aren't they already doing those on native apps?"

> [...]

> This is increasingly possible with WASM and WebGL.

It has been possible ever since the days of Netscape, when you could encode data as arrays in JavaScript; and making (simple) games with all assets in a single, easy-to-copy-around HTML file has been possible since Internet Explorer 8 was released.

WASM isn't any different from JS when it comes to how data is accessed. Native/offline games (and other applications) have their data stored locally; all the files are available there, and doing anything else requires the developers to go out of their way - which is why practically nobody does such a thing. Web-based games would have their data stored remotely, so even if you download the entire code of the game on the client side, you still cannot archive it, because you'll practically never have access to all the necessary data files (imagine an open-world game, for example, that downloads assets as you roam the world).

> possible with WASM and WebGL

And to try and make myself clear, i am not worried about the possibility of doing something, i am worried about the impossibility of doing something: ie. having full control over the software you are using.


It's been proposed a few times[1], the WebRTC crowd just shows up and says that WebRTC is "good enough" and the cycle continues. Never mind that the dependency graph for WebRTC is way too large and that WebRTC is just too complicated for GameDev needs.

[1] https://github.com/networkprotocol/netcode.io


I’ve done webrtc “servers” (in C, not some nodejs bindings for the chromium implementation) and webrtc P2P. It really is “good enough” for a lot of things.

But not game developers, who are used to dealing with a very forgiving environment that will let them do stupid things quickly.

I have watched game developers in earnest busy wait a network thread “for speed reasons” but also derive approximate linear solutions to nonlinear problems using overflow, wraparound, and so on.

Some of the dumbest guys I’ve ever met doing some of the cleverest things, or the smartest guys showing off just how dumb sleep deprivation makes you. I’ve never been quite sure.

Games are beautiful. Game developers are beautiful. Game code almost never is.[1]

Joining a new game dev team you think this is going to be it, finally some clean code the way the masters do it, but then it isn’t.

That is to say: Games aren’t how to do software development, but proof of how far you can get by doing software development wrong.

I have thought about packaging my webrtc “server” as a library and selling it to game dev shops, but I don’t think the professional browser market is there yet.

[1]: I’m aware of a small number of examples to the contrary, but there are many more counter examples.


Congrats, you've just gone ahead and proved my point. Zero technical reasons, other than an ad hominem attack on game development practices.

Unless you have a client library that can be paired to that server which takes up less than 200kb of mem you won't see adoption. You're going to want to use one network stack across your products and a lot of the handheld and smaller platforms are really memory constrained.


> Zero technical reasons other than an ad hoc attack on game development practices.

These have been brought up ad nauseam so I didn't think it was useful to go over them.

The biggest reason UDPSocket doesn't just show up in browsers is that:

• Browser vendors don't want to accidentally make it easy to trivially D/DoS servers

• Browser vendors don't want to accidentally make it easy to trivially D/DoS clients

• It's not clear how to handle NAT traversal except when (interactively) trying to traverse NAT

But let's say we don't need anything more complicated than basic hole punching, P2P or cross-origin UDPSocket, and we can solve the trivial D/DoS issue with some kind of Allow-UDPSocket-From HTTP header, or OPTIONS or whatever. And maybe we have some kind of weird HTTP/2.0 transaction that does TURN/STUN automatically. Maybe.

Then we can have UDPSocket and you can implement netcode in JavaScript if you want!

However, WebRTC has already solved all those problems. Including the NAT one (that SDP/STUN/ICE stuff is there for a reason!). And obviously P2P, but also including some you probably haven't thought of, like issues around IP fragmentation (1200 byte packets!? DTLS fragments things itself for a reason!), and whether rolling custom crypto was a good idea.

> Unless you have a client library that can be paired to that server which takes up less than 200kb of mem you won't see adoption.

200k should be plenty: the dumb client can speak a slightly trimmed-down WebRTC (e.g. limited to only a single codec) to keep the size down. My webserver, which is just about as fast as they get, is only about 1kb of code on the hot path.

But this doesn't answer my questions about a market: I don't sell to game developers, and I genuinely don't know if building a tiny+performant WebRTC client and server business is worth my time.

Is it?

How much will a game development studio pay to have this problem solved for them?


DDos is already handled cleanly by something like netcode.io using access tokens.

Anything more than the most trivial NAT traversal isn't needed, since any online game worth its salt uses dedicated servers in order to prevent cheating and gamestate manipulation.

The fact that you're bringing up a 1200 byte packet limit shows that you have no understanding of the domain space. Most networking stacks keep the packet size well under 200 bytes since they don't want to saturate the connection and dead-reckoning deals with state reconciliation.
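For readers who haven't met the term: dead reckoning just means extrapolating from the last authoritative state packet instead of waiting for the next one. A rough sketch (the field names are illustrative, not from any particular engine or from netcode.io):

```javascript
// Extrapolate an entity's position from its last authoritative state.
// last: {x, y, vx, vy, t} -- position, velocity and timestamp from the
//       most recent state packet
// now:  current local time, in the same units as last.t
function deadReckon(last, now) {
  const dt = now - last.t;
  return { x: last.x + last.vx * dt, y: last.y + last.vy * dt };
}

// When the next real packet arrives, the client reconciles: it snaps
// (or smoothly blends) from the extrapolated position to the new state.
```

This is why the per-packet payload can stay tiny: a position, a velocity and a timestamp are enough for the receiver to keep the simulation moving between updates.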

Binary size also matters, so the fact that you want to bring in a whole codec when it may not be wanted isn't a great selling point either.

I'm sure there's a market out there given how RAD Tools and other middleware companies do just fine. However I don't think you know enough about the domain to be able to sell it successfully.


> DDos is already handled cleanly by something like netcode.io using access tokens.

I can set up a netcode.io server and cause it to list any IP addresses I want as "server addresses", then cause a netcode.io client to accept that challenge token and permit traffic to that address. If netcode.io is widespread (i.e. in browsers) I can buy some advertising to knock out any client or server I want.

Unless you are in advertising, you might not be aware that I can do that: I can purchase something like a million clients for something like $10. Knowing how certain ad formats work makes it possible to purchase even more traffic for even cheaper.

These attacks are well known to Google and the WebRTC designers, and as such are not possible with WebRTC: the browser does not permit traffic at volume until ICE/SDP negotiation is complete, which means that signalling must have occurred.

By simply limiting the netcode.io clients to the games that use it (i.e. games distributed with one of the non-browser implementations of netcode.io), this attack is mitigated substantially, but as soon as we try to use netcode.io as a "simpler WebRTC" it falls flat on its face.

Splat.

Perhaps we are lucky that the window.netcode browser extension isn't more popular, or that all those users have ad blockers.

Anyway. The workaround I suggested would probably be sufficient to stop this attack as stated, but it requires a netcode 1.1 or maybe a 2.0 since all clients and servers need to be upgraded. And there might also be other attacks.

> Anything more than the most trivial NAT traversal isn't needed since any online game worth their salt uses dedicated servers in order to prevent cheating and gamestate manipulation.

What Cisco calls Dynamic NAT doesn't work with this scheme. It is popular. Perhaps these people don't play games because game servers don't support their network configuration, or perhaps these people don't play games for another reason.

Non-game uses of UDP are very interested in talking to these networks however. It will be difficult to get buy-in from other parties interested in UDP unless you solve these problems.

> Most networking stacks keep the packet size well under 200 bytes since they don't want to saturate the connection and dead-reckoning deals with state reconciliation.

IP packets fragment once they go above a certain size. That size can only be discovered by experimentation, but will never be smaller than 576 bytes. Many networks block the standard discovery process (called Path-MTU discovery) for misguided reasons, but protocol developers still have to deal with it.

If your UDP packets are bigger than this size, and everything else is working, then losing either fragment will delay receipt of the datagram and waste kernel memory. Developers who can do their own, smarter fragmentation use setsockopt + IP_DONTFRAG and save everyone a potential denial-of-service opportunity.
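To put rough numbers on the fragmentation point (a back-of-the-envelope sketch: it assumes a 20-byte IPv4 header with no options and glosses over the rule that non-final fragment payloads must be multiples of 8 bytes; the function name is mine):

```javascript
// Roughly how many IPv4 fragments a UDP datagram becomes at a given path MTU.
// Simplified: 20-byte IP header, no options, alignment rule ignored.
function ipv4Fragments(udpPayloadBytes, pathMtu) {
  const perFragment = pathMtu - 20;      // room left after the IP header
  const datagram = 8 + udpPayloadBytes;  // UDP header + payload
  return Math.ceil(datagram / perFragment);
}

// A 1200-byte payload fits in one packet on a 1500-byte Ethernet path...
console.log(ipv4Fragments(1200, 1500)); // 1
// ...but splits into three fragments at the 576-byte floor.
console.log(ipv4Fragments(1200, 576));  // 3
```

Lose any one of those three fragments and the whole datagram is lost, which is why the sub-200-byte packets mentioned above sidestep the problem entirely.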

If all you want to do is avoid head-of-line blocking on the player's network connection, then multiplexing several TCP connections (even websockets!) will provide the same latency and throughput guarantees that UDP will, getting correct NAT behaviour and resistance to D/DoS for free. Simply drop and reconnect any link on packet loss (trivial to detect on either side), and you only need enough connections to handle the maximum number of dropped packets you can tolerate within the time it takes to reconnect.

I've seen at least one game on HN use this trick in the last year, so I know it isn't unknown.

> Binary size also matters, so the fact that you want to bring in a whole codec when it may not be wanted isn't a great selling point either.

That statement might have been over your head. Sorry about that.

1 KB of code "on the hot path" is a shorthand way of describing the code that is executing in L1, and it refers absolutely to binary size. My i7 only has 64 KB of L1, and anything larger than that requires memory fetches and stalls. Keeping the hot path within L1 is a good way to get 1000x speedups (and is a big part of why my code tends to be so fast).

Ulrich Drepper's "what every programmer should know about memory"[1] might be good introductory material for some of these concepts, and I highly recommend you read it.

[1]: futuretech.blinkenlights.nl/misc/cpumemory.pdf

> However I don't think you know enough about the domain to be able to sell it successfully.

Perhaps, and it is a small domain: The only people who think WebRTC is "too complicated" seem to be game developers.


> I can set up a netcode.io server and cause it to list any IP addresses I want as "server addresses", then cause a netcode.io client accept that challenge token and permit traffic to that address.

No you can't, since netcode.io authenticates packets with a public/private key pair and a sequence ID. If the sequence ID has been seen recently or the signature check fails, it won't accept the connection.

Netcode.io was built by someone who dealt with millions of clients and shipped many high-volume games, so this isn't anything new.

> If all you want to do is avoid head-of-line blocking on the player's network connection, then multiplexing several TCP connections (even websockets!) will provide the same latency and throughput guarantees that UDP will, getting correct NAT behaviour and resistance to D/DoS for free. Simply drop and reconnect any link on packet loss (trivial to detect on either side), and you only need enough connections to handle the maximum number of dropped packets you can tolerate within the time it takes to reconnect.

> I've seen at least one game on HN use this trick in the last year, so I know it isn't unknown.

Nope, nope and nope.

For one, packet drops tend to happen in bursts, so your multiplexed TCP/IP connections just mean you're resilient to N+1 drops. Detecting a packet drop is not trivial, and you need to go through a whole SYN/SYN-ACK/ACK handshake if you reconnect. Dropped TCP/IP sockets also don't clean up immediately, so you can easily exhaust your available socket resources with this technique.

I played the game that you mentioned, and as you can see in the comments, the netcode was not ideal. It was pretty obvious that there was some head-of-line blocking even with multiple sockets. Compare that to something like Subspace, which ran on a single 250-400ms 56K connection seamlessly back in '99.

Look, I'm sure you're an expert in WebRTC but you don't understand the technical requirements for a realtime netcode. Rather than trying to tell us how we're 'wrong' try listening instead. Games have a pretty unique set of requirements which is exactly why you haven't seen off the shelf solutions like WebRTC gain any traction.


> No you can't since netcode.io authenticates packets with a public/private key pair and a sequence id. If either the sequence id has been seen recently or key sig fails it won't accept the connection.

How exactly do you think netcode.io knows that I own (or don't own) 151.101.193.67?

What exactly do you think prevents me from modifying my own netcode.io server at 151.196.182.212 to list that IP address as one of the game servers?

It's a legitimate connection as far as the client is concerned, and that's what I have a few billion of.

Why do you think the public/private key or sequence ID has anything to do with this? Why do you think for a denial of service attack I care if the "server" 151.101.193.67 accepts the "connection" or not?

> Netcode.io was built by someone who dealt with millions of clients and shipped many high volume games so this isn't anything new.

If it helps you, I've built software with an install-base over a billion, in a place where high latency and bugs don't just mean a dissatisfied customer, but loss of real money.

However, talking credentials at this point doesn't help. It is probably best you respond specifically to what you think is wrong with my thinking instead of bringing up how smart you think your friends are.

> packet drops tend to happen in bursts so your multiplexed TCP/IP clients just means you're resilient to N+1 drops.

If you want to send datagrams at 20 Hz, and your RTT is 200ms, then 10 connections are equivalent in throughput and latency to UDP: simply send datagrams down the channel. One side detects a stall? Tear down the channel and move to the next one. Even if you lose 10 packets in a row, the 50ms timer tick means you still recover at least one channel before the 200ms mark.

It's the same as UDP because it's the same number of packets. You can convince yourself of this with tcpdump, and subtract the TCP connect/teardown packets (which aren't blocking anything).
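The rotate-on-stall scheme described above can be sketched roughly like this (the class and callback names are mine, not from any shipping netcode; real channels would be WebSockets or TCP connections, here they are just objects with a send method):

```javascript
// Minimal sketch of "tear down the stalled channel, move to the next one".
// A channel is anything whose send() throws when the link has stalled.
class ChannelPool {
  constructor(channels, reconnect) {
    this.channels = channels;    // pre-opened connections
    this.current = 0;
    this.reconnect = reconnect;  // called with a dead channel's index;
                                 // the reopen happens off the hot path
  }

  send(datagram) {
    for (let tried = 0; tried < this.channels.length; tried++) {
      const i = this.current;
      try {
        this.channels[i].send(datagram);
        return i;                // index of the channel that carried it
      } catch (e) {
        this.reconnect(i);       // abandon the stalled link in the background
        this.current = (i + 1) % this.channels.length;
      }
    }
    throw new Error("all channels stalled");
  }
}
```

The sender never waits on a stalled link, which is the whole point: per-tick latency looks like UDP's as long as at least one channel in the pool is alive.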

> Look, I'm sure you're an expert in WebRTC but you don't understand the technical requirements for a realtime netcode.

Accepted, at least for these discussions.

One of the principal authors of netcode[1] says that he doesn't understand WebRTC. He says it's too complex.

I have given several examples of things it does better than the published version of netcode if it were immediately adopted by browser vendors, and unless those specific things are addressed, you are going to have a hard time convincing Google and Mozilla to include netcode.io JavaScript bindings.

I appreciate you don't think network address translation is important, and I realise you don't understand the denial of service attack vector I'm describing yet. I think these things are important because I am experienced with their impact, and I know how WebRTC protects users from these things.

You can put your energy into understanding this attack vector, and whatever others might lie hidden from view, or you can put your energy into solving the problems that might make WebRTC a poor fit for game developers. In either case, you probably need to learn WebRTC.

[1]: https://gafferongames.com/post/why_cant_i_send_udp_packets_f...


> Yes. Please, someone, make gamedev in the browser a reality.

I think gamedev in the browser is a reality, but it requires two things:

1. Familiarity with Web/OpenGL.

2. Familiarity with well-written Javascript.

Those two camps are typically not the same crowd...

There's a transition period that comes from moving DOM-based UI controls into WebGL, and this scares off a lot of solid JS developers.

Similarly, there's a transition period that comes from writing C# or C++ into clean JS on the level that games require (e.g. where messy code has a huge impact on development speed and sanity). For example, I think well-written javascript looks more functional than object-oriented. That point could start a flame-war of its own, but we can all agree that the JS in the wild is often messy as hell.

This assumes that JS is the language the game is written in at its core... tbh I don't see WebAssembly replacing this part anytime soon... I see C->WebASM more for like helper utilities. For example, it would be great for a tool to generate geometry for line drawing, rather than manage the logic of when and where to draw the line due to mouse events.
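The line-geometry helper imagined above is exactly the kind of pure number-crunching that ports cleanly. A minimal sketch (written in JS for brevity, though it's the sort of function you'd move to C -> wasm; the name is made up, and it does no joins or caps):

```javascript
// Expand a screen-space polyline into two triangles per segment,
// suitable for a WebGL vertex buffer.
function lineToTriangles(points, halfWidth) {
  const verts = [];
  for (let i = 0; i + 1 < points.length; i++) {
    const [x0, y0] = points[i];
    const [x1, y1] = points[i + 1];
    const len = Math.hypot(x1 - x0, y1 - y0);
    // unit normal to the segment, scaled to half the line width
    const nx = (-(y1 - y0) / len) * halfWidth;
    const ny = ((x1 - x0) / len) * halfWidth;
    // two triangles covering the segment's quad
    verts.push(
      x0 + nx, y0 + ny,  x0 - nx, y0 - ny,  x1 + nx, y1 + ny,
      x1 + nx, y1 + ny,  x0 - nx, y0 - ny,  x1 - nx, y1 - ny,
    );
  }
  return new Float32Array(verts);
}
```

The JS side still decides when and where to draw in response to mouse events; the hot inner loop above is what you'd hand off.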

As more people start to glue these sides together, I think we'll see some really cool stuff on the web. Maybe even as cool as the full-flash Away3D type sites we had 10 years ago :P


I've written a library for that purpose [1]. It's not ideal, but I guess it's as close as you can get to UDP in a browser without using extensions.

[1] https://github.com/seemk/WebUDP


Unity removed support for their web plugin over a year ago: https://docs.unity3d.com/Manual/Web.html

You can now build html/js that uses webgl directly: https://docs.unity3d.com/Manual/webgl.html


Aren't browser plugins considered a deprecated technology already? The latest versions of the major browsers won't even let you run Java applets AFAIK, everything is forced to go entirely web-native (compile to HTML+CSS+JavaScript/wasm) and I'm not sure whether that's good or bad so far...


> Unity also has the slight delivery hitch of needing the browser plugin to run games in the browser.

The plugin has been deprecated for quite a while now. Unity compiles to WebGL/Javascript.


> HTML is a pain for things that aren't documents.

Indeed it is, but there seems to be no relevant alternative. The only thing that is better [for apps] is XAML (WPF), but it is Windows-only.


SVG + JS is really quite good. You will need a few abstractions to work comfortably with the SVG DOM, but you can easily write these as you need them.
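One trivial flavour of those abstractions is a hyperscript-style element builder. This sketch just serializes to markup for illustration (names are mine); in a browser the same shape would wrap document.createElementNS("http://www.w3.org/2000/svg", tag) instead of building a string:

```javascript
// Tiny helper: h(tag, attrs, children) -> SVG markup string.
function h(tag, attrs = {}, children = []) {
  const a = Object.entries(attrs)
    .map(([k, v]) => ` ${k}="${v}"`)
    .join("");
  return `<${tag}${a}>${children.join("")}</${tag}>`;
}

const chart = h("svg", { viewBox: "0 0 100 100" }, [
  h("circle", { cx: 50, cy: 50, r: 40, fill: "steelblue" }),
]);
console.log(chart);
// <svg viewBox="0 0 100 100"><circle cx="50" cy="50" r="40" fill="steelblue"></circle></svg>
```

A handful of helpers like this (plus one for event wiring) covers a surprising amount of UI work before you miss a framework.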


Sounds reasonable. Why don't people use it then? Are there any frameworks already available so one won't have to invent a GUI toolkit from scratch?


One of the “few abstractions to work comfortably with the SVG DOM” that “you can easily write […] as you need them” would be a layout engine. You need one to adjust your UI to the user’s screen size.

Another, I think, is that, to add scroll bars to a view, you have to write SVG scroll bars, make them respond to the mouse scroll wheel, be accessible, etc.

Even if we do get them eventually, it won't be "easily".

I think it is easier to use HTML for layout, and SVG (or canvas) for those views that need it. You can do that now.


>layout engine

Absolute positioning gets a bad rap! And yes, if you want widgets then you have a lot of work ahead of you. But it's a lot of fun to just work with the primitives - don't believe them when they tell you that you always need a GUI toolkit!


No, there are other XAML variants, like Avalon and Xamarin.Forms.

JavaFX also follows the same design ideas.

And there is QML as well.


Xamarin.Forms is cool but still doesn't work on Linux, so it seems the same story as XAML to me: cool but of little use, unlike HTML, which is less cool but absolutely useful, as it works everywhere. JavaFX doesn't work on mobiles/tablets (there was a prototype in 2011, seemingly abandoned); sadly it seems to be about as dead as Flash and Silverlight now. As for QML, search results suggest it works everywhere but it's still hardly a popular choice for some reason, so perhaps it's worth a closer look (but I still doubt it can compare to HTML, as using Qt means producing binary builds for every platform to support, AFAIK).


Avalonia (which I mistyped as Avalon) works in Linux, so XAML works on Linux.

JavaFX surely works on mobile.

http://gluonhq.com/

HTML is useless without a binary build of a running browser, which doesn't expose platform-specific APIs.


Why not use something like Pug[0], which compiles to HTML?

[0] - https://pugjs.org/api/getting-started.html


If you need performance, portability, and something that can be more easily maintained, something like Qt is the only way. Power tools that are graphics heavy, or music applications, or anything that requires heavy multimedia lifting: you're not gonna do that currently in the browser.

Maybe this will change in the upcoming years, though. I also have a feeling the client/server bridge will keep getting closer as we reach more performant ways of loading code from servers but executing it natively on the client/computer/handheld/phone, whatever.

Also, WebGL compatibility is still far away from supporting real production apps (I mean, reaching a state where you can guarantee your app will work on at least most computers people have). Many platform/GPU/driver combinations just point blank refuse to work, or the browser has blacklisted problematic combinations, as these can in reality crash the whole OS; this has happened to me many times while developing WebGL applications.


Most of the time someone posts a WebGL demo here, it fails to work consistently across all my mobile devices.

Some of which don't have any issue running OpenGL ES 3.x games.


Yeah, I'm actually afraid to run some of the more complex WebGL demos on my laptop, as I've had occurrences of them hard-locking my computer, which should not happen.


I agree, they are in the browser and they searched for the software in the browser. Why not deliver it in the browser too, if there is no good reason not to.


> I'd say "or a game," but many are webapps.

Not yet, though I think this will be true eventually.


Is this the cause or the effect?


> So the web browser is pretty much a mini operating system.

Was this not the tacit reason for Microsoft's burying of Netscape? A browser that is a portable platform for applications was a threat to its de-facto near-monopoly in operating systems for personal / end-user computing.


It's been a long time coming, of course, but when I saw vim[1] ported to Javascript it really hit home. There's no denying it, your web browser is now an OS. Not a mini OS, an actual honest-to-goodness OS that multiple vendors add features to on a six-week release cadence.

[1] https://github.com/coolwanglu/vim.js/


Just need to be able to run Chrome/Firefox directly in a Docker instance, and we’d be there.

But seriously, maybe that’s the next step in this crazy ride we’re all on.


We may even need something stronger than Docker. There are various projects that use virtualization technology to sandbox a browser session at the hardware level. My guess is that ultimately this is the future of the web browser.



One commercially supported Windows [browser] application virtualization (-ish, pretty much Docker-grade security for end users) option, $35 for personal use: https://www.sandboxie.com

Most similar tools are marketed as antivirus products, and are only available via enterprise licensing and/or SaaS reporting. An example of a hardware virtualization based option: https://www.bromium.com/platform/our-technology.html


Or firejail


Obviously, where we are going is just exposing all the system resources (like file systems, GPUs, USB ports, etc.) to JavaScript/wasm APIs and replacing the whole GUI stack (X/whatever, WM, etc.) with just a browser. A browser is just a cross-OS compatibility layer now (and this is quite cool, as you can now run considerably exotic OSes like Haiku or Solaris and use almost all the stuff other people do). I have also heard ChromeOS is probably going to replace Android at some point.


I guess so far it's just a VM. Someone still needs to port Linux or some other OS to WebAssembly to make it whole. Having the C++ compiler working is a very good start.


It'll happen: WebAssembly as a Linux target architecture. It's a solution without a problem, but maybe web devs will find a way to use it to make their development easier.


Not WebAssembly but still fully Linux in browser- https://bellard.org/jslinux/


> we've pretty much come full circle?

If you want to play old classic games like Heroes 3, AoE2, or Diablo 2 on Windows, you have to install them using WINE.


Yes. I hope more people understand this: the browser is an Inner Platform. It's just a portability layer over the OS, so that app developers can deliver the same thing to multiple OSes without making the app itself portable.

These days, you can compile a Qt app with a web view with a single source for all three major platforms.


You can have that with any language that has a rich standard library (batteries included)

C and C++ are the exception, as they were tied to nothing more than a basic POSIX-like compatibility layer.


Not full circle until we can compile firefox or chrome to wasm :)


It does not run in the browser I am using.

That is one reason I chose it.


Impressive as it is, it's depressing that it enables C++'s survival. C++ is a badly designed mess that gets ever more complicated, and most probably the language itself is order(s) of magnitude more complex than the problem you're trying to solve. Please enable other, better-designed comparable languages (like D, Rust, Nim, etc.).



