Is WebAssembly the Return of Java Applets and Flash? (steveklabnik.com)
509 points by Vinnl 3 months ago | 399 comments



The thing about Flash was that it was pretty WYSIWYG, and a lot of creative people flocked to it because it had strong multimedia capabilities. Personally, I kind of miss the days when you could build really nice-looking stuff without much technical expertise. Nowadays you need to know all about CSS, polyfills, and JS quirks, so there's always this limitation where you have to understand pretty technical stuff early on to get something done.

Flash was more like using Premiere, where you just edited your timeline with a bit of interactivity sprinkled over it; no movie editor ever had to get their hands dirty with a scripting language or low-level file formats just to edit a movie.

I had a lot of "oh wow" moments at the end of the 90's and beginning of the 00's with Flash. It was like the web was warped into the future. Nowadays you can achieve the same, but not as elegantly. It kind of reminds me of how PCs had to catch up with the Amiga for years; perhaps starting with Wing Commander, almost eight years later, things were really on par or better.


To me, flash was like the past, or a warped vision of the distant future. The interfaces people created had visual appeal, but that’s all they had. It lacked all of the usability that I was used to finding on webpages, and therefore made for a very frustrating experience. No bookmarking, no back and forward, no right clicking to open a new tab, frequent superfluous sound, major security holes, long loading times. I would hardly call any of this elegant. As a user, I was thrilled when people stopped using flash.

Flash was like using poor quality native apps, which was a step backwards from a browser.


To me, JavaScript is like the past, or a warped vision of the distant future. The interfaces people created had visual appeal, but that’s all they had. It lacked all of the usability that I was used to finding on webpages, and therefore made for a very frustrating experience. No bookmarking, no back and forward, no right clicking to open a new tab, frequent superfluous sound, major security holes, long loading times. I would hardly call any of this elegant. As a user, I was thrilled when people stopped using JavaScript.


Link me to the most user-hostile ECMAScript-based web page you can find, and I'll open up devTools and delete my way out of whatever email-list/login/begware DOM-jail they try to keep me in.

Link me to the most user-hostile Flash-based web page you can find, and I'll watch a ten year old hour-long video presentation from a Gnash developer explaining their 100-year plan to ship a usable open Flash runtime alternative.


There are some bad JS websites, but most of them do have proper bookmarking, since pretty much every web framework supports multiple pages, including forward and back navigation. Browsers are also good enough to handle fragment (#) links within an SPA. They also tend to have faster-than-typical loading times for navigation within the site.

Flash didn't have any of that, and was much slower to load than modern JS websites.
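
The fragment-link pattern the comment alludes to can be sketched in a few lines; the route shape and function names here are illustrative, not from any particular framework:

```javascript
// A minimal sketch of fragment-based SPA routing. Because every hash
// change creates a history entry, back/forward and bookmarks work
// without any server round-trip.
function parseRoute(hash) {
  // "#/videos/42" -> { page: "videos", id: "42" }
  var parts = hash.replace(/^#\/?/, "").split("/");
  return { page: parts[0] || "home", id: parts[1] || null };
}

if (typeof window !== "undefined") {
  window.addEventListener("hashchange", function () {
    var route = parseRoute(window.location.hash);
    // a real app would render the matching view here
    console.log("navigated to", route.page, route.id);
  });
}
```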


Just no. I'm on the latest Chrome on Android 6, and when I browse search results on YouTube, then click a video and hit back, I expect the scroll position within the results to be restored. Instead, when I hit back I only see the red top bar and a loading spinner, and a moment later the results show up again and I'm at the top of them. Because we don't have nice paged results anymore, but this hip and ubercool dynamically extending page of results and everything.

And this is not some student's first approach at modern web technologies, it's fucking YouTube from Google. They can't get their shit working in their own browser. I have yet to see an SPA with a convincing UX that's more than just some wannabe webdev's "about" page.
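
For what it's worth, the lost-scroll-position failure described here is avoidable with the History API. A hedged sketch (not YouTube's actual code) of one common approach: stash the scroll offset in the history entry's state and restore it on popstate:

```javascript
// Merge the current scroll offset into a history state object.
function withScroll(state, scrollY) {
  return Object.assign({}, state, { scrollY: scrollY });
}

if (typeof window !== "undefined") {
  // take over from the browser's (often unreliable) built-in heuristic
  if ("scrollRestoration" in history) history.scrollRestoration = "manual";

  // before navigating away, remember where the user was
  history.replaceState(withScroll(history.state, window.scrollY), "");

  window.addEventListener("popstate", function (e) {
    if (e.state && typeof e.state.scrollY === "number") {
      // assumes the result list has been re-rendered before this runs
      window.scrollTo(0, e.state.scrollY);
    }
  });
}
```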


> this is not some student's first approach at modern web technologies, it's fucking YouTube from Google

It's sad that Google is revered by the developer community as a role model for software engineering because their frontend web work has—at least for the last ~10+ years—always been terrible and a really great example of worst practice.

Gmail's early HTML version is possibly the last faded memory of a quality frontend product coming from Google. Everything they've created since the advent of GWT has been aggressively anti-user and anti-interop. Gmail took a long time to get full browser support and a longer time to play well with back buttons, etc.—meanwhile the Gmail interface has slowed and bloated with each iteration, the latest bordering on unusability on my very new midrange laptop. Wave never worked in anything but Chrome, and the same is true of the early iterations of most of their large newly released products over the years. The Google homepage provides a totally inconsistent experience across browsers—on mobile I see three different results views in three different browsers, two of them Blink-based! Why doesn't search by image work on mobile? We're thankfully no longer lumped with the disaster of a desktop experience that was Google Instant Search.

Similarly, Microsoft and Apple don't have great histories here. When looking for good development practices, you should always look at the example set by companies that need to compete. Monopolies don't need usability.


> Why doesn't search by image work on mobile?

This is a great example of Google's spotty frontend work -- especially since you can simply tap the share button on iOS, and then scroll over to "Request Desktop Site". And voila, image search now works perfectly. You can even click the camera icon to upload an image directly from your phone. And yet, this functionality is completely hidden by default on mobile.


Ironically, Google landed its initial success with (and maybe partly because of) an ultra-minimalist website focused on the task. They seem to have forgotten that, and now they have bloat everywhere. Instead of reducing it, they put a lot of resources into developing technologies to deliver that bloat more efficiently.


> Instead of reducing it they put a lot of resources into developing technologies to deliver that bloat more efficiently.

Efficiency has a lot of different definitions depending on whom the efficiency is "for". I would say Google put a lot of resources into increasing the bloat (adding abstractions) in order to automate the creation and maintenance of their services. They are the pioneers of removing humans from the equation: they're using ML to create their maps instead of the previous focus on humans driving around with cameras, they have a notorious lack of human-intervention customer service for most of their paid services, and even GWT, which I mentioned above, removed human JavaScript devs (admittedly in order to allow Java devs to write it, but unlike other transpilers—e.g. TypeScript/CoffeeScript—which produce relatively readable, direct JS equivalents, GWT is heavily abstracted and the output isn't in any way representative of what a human would create).


Call me a conspiracy theorist, or crazy, but I say that this is by deliberate design. Just like the push for HTTPS, on some level. You can't proxy most of the web, you can't cache, you can't bookmark SPAs and AJAX-loaded results, you can't scrape any of the data easily, and you can't intercept and modify it if you wish without doing fancy SSL certificate injection.

"Gee, I remember seeing this unique comment on a youtube video. Let me search for it, maybe some web-crawler has indexed it and made it available to the rest of the web. Nope."

"Oh I remember the video, it was XYZ. I'll just go there and find the comment. Oh, it's not at the top of the comments, and there are 5600 comments."

"It's okay, I'll just ctrl-F for that specific word I distinctly remember was used in that comment. Nope, ctrl-f only finds what's loaded in the Dom, you've got to scroll!"

"Hmm, maybe I can just keep scrolling for a while till I find it. Scroll, wait. Scroll, stop, loading icon, scroll some more, wait, scroll some more, wait."

Hey, at least I can share a specific time-stamp location within the video on Google+ and Facebook!


On top of that, opening an email in a new tab from inside Gmail hasn't worked for at least 2 years now, and it doesn't look like they're going to bring back support for this basic functionality any time soon. At least the new Google Maps version has gotten pretty acceptable in the last year or so; it's almost as responsive as the classic one used to be.


Firefox Android doesn't suffer from this defect.


And then you have developers/companies/projects that end up treating JavaScript+HTML5 like "Flash but newer" or "Java but newer" or (for maximum pain) "Silverlight but newer". That sort of mentality - whether conscious or unconscious - underlies the vast majority of single-page apps in my observation/experience.


Javascript interfaces usually have much better usability than Flash ones used to have.

That is because following the standard semantics of the Web in JavaScript is easier than breaking them, while for Flash it was the other way around. Of course that does not mean that every JS-powered page is good, or that every Flash-powered one was bad, and ironically it does mean that the more JS, the worse it tends to be, but the more Flash, the better it used to be.
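
A concrete sketch of what "following the standard semantics" looks like in practice: use real `<a href>` elements and only hijack plain left-clicks, so open-in-new-tab, middle-click, and bookmarking keep working. The wiring below is illustrative, not taken from any framework:

```javascript
// Decide whether the SPA should handle a click itself, or leave it
// to the browser (new tab, new window, download, etc.).
function shouldInterceptClick(evt) {
  // modified clicks and non-left buttons belong to the browser
  return evt.button === 0 && !evt.metaKey && !evt.ctrlKey &&
         !evt.shiftKey && !evt.altKey;
}

if (typeof document !== "undefined") {
  document.addEventListener("click", function (evt) {
    var link = evt.target.closest && evt.target.closest("a[href]");
    if (link && shouldInterceptClick(evt)) {
      evt.preventDefault();
      // in-app navigation, but the URL in the address bar stays real
      history.pushState({}, "", link.href);
    }
  });
}
```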


Yet that doesn't prevent people from breaking it all the time.


You're absolutely right (and that's an excellently written post; very effectively communicated), but I think the difference is in what's possible/easy/difficult.

Building a usable experience in Flash was possible, but sufficiently difficult that almost no one did.

Building a usable experience in JS is significantly easier than it was in Flash, though sadly still seems to be sufficiently difficult that too few do. The ratio of usable JavaScript apps is a lot higher than it was for Flash, but is still a minority. It is at least an improvement, though, if an incremental one.


It's true that it is much easier now in JS, but that is because the browsers and the core web technologies have radically evolved in the 10+ years since Flash was the dominant way of creating interactive experiences. Looking back at the ecosystem when Flash was still a viable technology, we have to recall where the browsers were at, where HTML was at, and where JavaScript itself was.

At the start of 2005, I was helping lead a team building a highly interactive experience for a major car company using Macromedia Flash and Flex 2. When we launched the site we had full cross-browser pixel accuracy, fully supported URL deep-linking, bookmarking, page history navigation, web crawling, keyboard navigation, screen reading support, interactive video, highly animated experiences, and even fully integrated web mapping using a beta version of Microsoft's first interactive map tech (it eventually became branded as Bing Maps).

What we built then was possible only because of Flash; there was no way we could have created the complete experience in JS/HTML/CSS. Granted, we could have done a lot of it in native web tech (and in some cases, we had to under the hood), but making it fully pixel-accurate cross-browser would have tripled the dev & testing time. On top of that, some of the features would have been impossible without Flash.

Let's recall where we were in 2005. Chrome was still 3 years away (it was released in 2008), Firefox was still version 1.0 (1.5 didn't release until Nov. of that year), Gmail had just been released into beta (you had to have a friend to get access), and Google Maps was still an experimental beta that was truly pushing the boundaries of what JS/HTML could do. The browser history API didn't exist, so we had to do crazy iframe hacks to create deep links and history (this was true of any app, no matter what tech). There was no video in browsers without a plugin (the HTML5 spec wasn't finalized until Oct. 2014). There was no such thing as CSS animations; they got their first release in Firefox 5 (June 2011).

So, were Flash apps trash? Many were, but not all of them. That's like claiming that modern browsers solve it all. We still have massive load times (look at how much time and effort is put into optimizing content delivery), and cross-browser and backward compatibility is a nightmare, way more of a challenge than it ever has been. In some ways, modern web development is much better than Flash development was, but to be perfectly frank, we have a LONG way to go. Honestly, we haven't caught up to where Flash was 14 years ago.

But, we are getting there for sure. I can see why Steve Klabnik is so excited about WASM. I can envision where it is going and it reminds me of where Flash was trying to go before it was slaughtered by Adobe. It's an exciting time for sure, but we should also look back at where we came from and instead of just stating Flash is trash and it caused the web to be terrible, we should also look at what it did right and what it allowed us to create before we throw it out with the bathwater.


Nice attempt at memeing a response, but that comment doesn't work. Most SPAs that I've seen allow bookmarking, back and forward, and right-clicking to open a new tab... and they certainly don't introduce major security holes that attack your local machine (they're just websites, not a poorly coded third-party plugin like Flash), and they don't typically have superfluous sound or long loading times. In fact, browsers are blocking superfluous sound more and more, which wouldn't have been possible with Flash, since the browsers didn't control Flash.

So... nice one there, green user. You really got 'em!


Flash used to make websites was garbage, but used to make games that ran in the browser it was very nice.


Exactly this. Flash for websites was trash.

But Flash games were a boon. Everyone with an idea was able to make it into a game. I spent a lot of time playing games on Kongregate and similar sites. Many were really interesting, well worth the effort of sifting through the rip-offs and trash.


I used to love Flash games, and I was also watching the projects that tried to get .swf files to run in a Flash virtual machine emulated via something like Emscripten, but I feel like all those projects have died out :/


And Flash for animation: e.g. this was produced and published in Flash and was surely smaller to download than the youtube video of it:

https://www.youtube.com/watch?v=LeKX2bNP7QM

The internet speeds were much slower then, the difference mattered a lot. Also note, from the description:

"Sadly, it's not possible to replicate the pop-up mouse-overs from the "Special Edition" - though I was able to append the "deleted scene" presented in that version. If you want to see that version - or the "Fire BAD!" flash video game where you attempt to extinguish flames on James Hetfield (tragedy + time = humor), you'll have to find the original .SWF files and have a way to play them."


In that regard, looks like Unity is the new Flash.


It is and it isn't. Unity is great but it's got like a min 5MB download for just a spinning cube. Flash (because the plugin was built in) started instantly and had built in support for streaming. Unity can stream, after the 5MB engine download, but for whatever reason it's not common.


I meant more as in people with no technical knowledge whatsoever creating interactive stuff. Just look at the Cambrian explosion of indie games from (most) people who probably can't do long division by hand.


Very likely! But some years ago, back when I still played Flash browser games, Unity games didn't work in a browser for Linux, the only platform I use. Maybe that situation has changed? :)


What with a WebGL target for Unity these days, it has. The Unity plugin was deprecated versions ago and they don't even ship a target for it anymore.


It seems the trend was that it was nice for makers, not so much for users.


Which, ironically, is the case with modern Electron apps and bloated JavaScript UI frameworks as well.


I'm a user of several Electron apps and they are very nice for me too.


Electron (or embedded Chromium, etc.) is lazy development: save time on cross-platform work at the expense of your users' CPU overhead/battery time/SSD life.

Spend time watching some of each year's WWDC talks and note how much effort Apple engineers put into fine-tuning certain APIs to enable app developers to optimize battery use, disk access, or graphics rendering. State-of-the-art stuff. And then your Electron app just ignores all of that.


I think calling it "lazy" is uncharitable, unless you're also going to chastise developers who target only one platform -- they're likewise not doing the cross-platform work.

Manually targeting multiple platforms is really hard. How could one person possibly keep track of all the battery-saving API changes in OSX, iOS, Windows, Linux, and Android?

Yeah, it's annoying that Electron apps are so bloated. But the alternative would be > 50% of people can't use those apps at all because you only targeted one platform.


"unless you're also going to chastise developers who target only one platform"

Which I tend to do, particularly since that "one platform" is almost always either Windows or macOS, neither of which I use on a daily basis (and neither of which I have any desire to use on a daily basis).

Meanwhile, there's such a thing as cross-platform GUI applications that don't try to fit an entire web browser into them. Especially if your programming language of choice doesn't require precompilation, frameworks/toolkits like Qt and GTK and Tk are perfectly viable for cross-platform development, at least on the desktop (which is usually where folks are using the likes of Electron or CEF anyway). They also tend to perform significantly better and stay much closer to the look-and-feel of the rest of the operating system (no, I don't care if you think you know better than me about my sense of style; if your app doesn't respect the look-and-feel of the rest of my system, then it sticks out like mold on a slice of Wonder Bread).


Do you know of any of those cross-platform GUI tools that work in the web browser? Targeting Windows/Mac/Linux is easy, but outside of JS I have seen no way to write one application that works on the web, Windows/Mac/Linux, Android, and iOS.


I know Qt supports both Android and iOS (and a bunch of other mobile platforms, apparently, like Tizen and Blackberry).

I don't think any of them target HTML/JS, though.


Because it's not like there aren't any other options for multiplatform development...


> How could one person possibly keep track of all the battery-saving API changes in OSX, iOS, Windows, Linux, and Android?

You don't. You just use system-provided APIs and get the improvements for free whenever the system frameworks are updated to be more efficient.


And then your Mac App ignores all of the people not on Mac.

I'm not saying everything should be Electron, but it certainly has usecases just like native does.


Doesn't have to be a Mac only app. But be considerate to your users and use the best available SDKs for each platform, not one size fits all.


If resources were infinite and there were never any trade-offs, most people would agree with you.

But what's your game plan under other circumstances? Do nothing?


If you're a small-time indie dev, do what works so that you can ship. If you're a well-funded co. like Spotify, GitHub, or Slack, however...


You're making the mistake of thinking that everything is just a technical challenge that you can throw money at, and that technical superiority is the only trade-off.

Just because an enterprise has funding doesn't mean it's wealthy in other, more scarce things like organizational capital. In fact, I wouldn't be surprised if all the client developers at Slack would love to split off into dedicated platform teams and do a rewrite. We as developers love that shit.

But you should try to understand why these enterprises make these trade-offs and why they are still in these positions in spite of how much disposable money you think they have.


Sorry but I shouldn't have to use 2GB of RAM to browse the web. Compare Airbnb to Craigslist. One is visually pretty, the other isn't. One uses every framework under the sun, the other is HTML.


“Nice for you” and “nice for your computer” are being treated as separate things here, I think. Assuming a supercomputer (like the kinds most Electron app devs must surely have), Electron [native] apps have a good user experience.

Electron wrappers for originally web apps, on the other hand, are usually horrible user experiences—often they don’t even support right-clicking elements, or clicking on links to open them in a browser.


> Electron [native] apps have a good user experience.

No, they don't. Ever tried to drag and drop into an Electron app? How about used VoiceOver?


Relatedly, I played quite a few flash games, and they were nice for me.


This is a very broad statement. Would you care to elaborate?


"slack"


"spotify desktop"


Wasn't Spotify originally written in C++?


Even if it was, it isn't now.


It probably still is, since Chromium Embedded Framework (which is what Spotify uses, last I checked) is used via a C/C++ API.


For some cases, nice for the makers is nice for the users.

I don't think more than a handful of the games and little videos that I enjoyed as a kid on Newgrounds and Addicting Games would have been made had they not had such a low barrier to creation.

I do kernel/firmware development now, so I'll be the first to admit that this value proposition isn't valid for all domains. But sometimes, for some use cases, just having a simple to use scripting environment is totally the way to go. Particularly if it's sandboxed well and can't really harm the end user.


Legions of bored teenagers would fight you over that. :) At one time there was no force so powerful as a bunch of friends sharing albinoblacksheep games/videos. Users loved it.


A pretty amazing collection of games was at friv.com


I only got into CS because of Flash. First I spent a ridiculous amount of hours on games, then I learnt to reverse engineer and tweak them, and then a ridiculous amount of hours creating and debugging games/apps with Flash.

Flash for the win! Till the day I die.

AS3 was amazing. When Typescript came out, it was a lot like AS3. I was convinced and became a believer in TS and never looked back.


holy crap, i've never seen this before. it's still up


It was also quite nice for users, assuming the intentions of the makers were good. Ads and security exploits were the truly nasty bits.


Not just games; anything other than hypertexts and forms.

In those days there was no notion of SPA or other contemporary well-established patterns for online interaction.

I authored several commercial sites using Shockwave and the like, and it enabled us to give clients the ability to author media-rich and design-heavy presentations in a fashion they were familiar with from card-deck applications. At the time, doing so in a 'sexy' way made them stand out.

It was a cul de sac, but it was a worthwhile direction to test in terms of UX.

That it was proprietary was weighted differently in those days. The world was dominated by closed standards and open source was still vestigial especially in commercial applications.


How did you reflow the UI when users resized their browser?


You could communicate between flash and the browser:

https://help.adobe.com/en_US/FlashPlatform/reference/actions...
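
The JS half of that bridge could look something like this; it assumes the SWF registered a `setStageSize` callback via `ExternalInterface.addCallback` (the element id and method name here are hypothetical):

```javascript
// JS side of a Flash <-> browser resize bridge. ExternalInterface.addCallback
// in the SWF exposes the method directly on the <object>/<embed> element.
function computeStageSize(winW, winH, minW, minH) {
  // never let the stage shrink below the layout's minimum size
  return {
    width: Math.max(winW, minW),
    height: Math.max(winH, minH),
  };
}

if (typeof window !== "undefined") {
  window.addEventListener("resize", function () {
    var swf = document.getElementById("movie"); // hypothetical embed id
    var size = computeStageSize(window.innerWidth, window.innerHeight, 320, 240);
    if (swf && typeof swf.setStageSize === "function") {
      swf.setStageSize(size.width, size.height); // Flash reflows its own UI
    }
  });
}
```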


Flash websites could have a back button. Maybe it wasn't quite as easy as putting in a hyperlink, but it was still pretty easy to configure.

Not arguing with you about the rest. As a web developer I still haven't made a website in the last 5 years that was as visually impressive as my Flash projects from before, and sometimes I miss the visuals....

But people forget that the AWESOME intro that you wait two minutes of loading to see... is only cool the first couple of times. Many, many people were losing five minutes of loading time just to visit websites that they used EVERY day.


I agree, but it's about using the right tool at the right time. How else could they have started YouTube back then, for example? Early YouTube definitely was a glimpse into the future.


It can be argued that the existence of flash delayed the creation of proper video support in browsers.


Not sure what you mean by 'proper' - but Flash video was a step up from the previous era, where websites gave you a choice of either RealVideo or Windows Media formats - the more quirky sites had QuickTime - and you had to have plugins for each [I'm dating myself here].

By virtue of being installed on 98% of all computers, Flash video was a de facto standard video target. One that could be played on Linux without getting into legal grey areas too!

Without Flash looming over them, IE would never have supported cross-platform video - this would have been Microsoft undermining the WMP format.


It was also a useful proof of concept of what could be done. Without Flash, would there have been the critical mass of users who wanted video in their browsers at all? Or the content?

I think that we would have gotten there eventually, but probably not as fast. The close - but not quite good enough - implementation of video in Flash was as much of a catalyst as it was a crutch to lean upon.


Bullshit. Without Flash or some other plugin there'd at best have been a different standard for each browser, with the only one worth caring about being whatever IE implemented.


And pretty much all of the downsides you mention apply to these newfangled WASM+Canvas apps


Flash Professional still exists (renamed Animate CC) and outputs to Canvas, WebGL, and Flash/AIR. https://www.adobe.com/products/animate.html

I agree with you that it's a little sad that amateur animation fell a bit out of vogue, but the tools are right there for anyone who's interested to pick up.


> I agree with you that it's a little sad that amateur animation fell a bit out of vogue, but the tools are right there for anyone who's interested to pick up.

I have memories of being wowed by what creative people could do with Flash in the 90s, and I would not call that amateur animation. I think the problem (1) is that we currently seem to be lacking tools that let people skilled in the visual arts create things without knowledge of the mechanics, so to speak. It's as if, in order to write a great song, someone first had to understand how a musical instrument is built. I would not call Amanda Palmer or Lee Ranaldo amateurs, yet I bet they probably can't build their own instruments.

(1) I must make the disclaimer that for a couple of decades already I've been working only on the backend, so I may surely be missing part of the picture here.

Edit: Fixed footnote mark.


Amateur as in non-professional/non-commercial; people not in it for profit.


Amateur meaning lover. Someone who does something for the love of it and not the money.


Makes sense then. My mistake for forgetting that definition of the word when reading your comment.


To be fair, there were reasons those things fell out of favor; there was a real tendency toward crazy, uneditable code that was way larger than it needed to be. Perhaps some of these things would be less of an issue now (honestly, a bunch of wasteful code is less of an issue now than when most of us were on 56k modems), but they weren't made up.


Absolutely, my first link was 2400 so I know exactly what you mean :)

And I know there are still issues. Given today's speed, issues are more about security and accessibility.

My point was just that the artists using flash to create good content were not amateur animators.

Edit: My point was wrong as it was based on a misunderstanding of the use of the word 'amateur' by the person I was replying to.


The only good content in Flash was games and animations (like Homestar Runner). If you were using Flash as a Web/application design tool, you weren't making good content. You were making a pain in my ass.


Still waiting here for ADP and weather.gov to get their shit together and enter the 21st century.


What Flash content does weather.gov have? Their website visually appears a bit outdated, but it works just fine.


The looping radar views use flash [1] [2]. The composite regional radar loops don't though [3]. That is unfortunate because the former has more information and detail. There's no reason why it can't be reimplemented with canvas or SVG to manage the overlays.

[1] https://radar.weather.gov/radar.php?rid=buf&product=N0R&over...

[2] https://radar.weather.gov/radar.php?rid=BUF&product=N0R&over...

[3] https://radar.weather.gov/Conus/northeast_loop.php
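
A rough sketch of that canvas reimplementation (the frame images, element id, and timing below are all hypothetical): preload one image per radar sweep, then cycle through them on a timer, drawing overlays on top as needed.

```javascript
// Advance the loop, wrapping back to the first frame after the last one.
function nextFrame(current, frameCount) {
  return (current + 1) % frameCount;
}

if (typeof document !== "undefined") {
  var canvas = document.getElementById("radar"); // hypothetical canvas element
  var ctx = canvas.getContext("2d");
  var frames = []; // preloaded Image objects, one per radar sweep
  var i = 0;
  setInterval(function () {
    if (frames.length === 0) return; // nothing loaded yet
    i = nextFrame(i, frames.length);
    ctx.drawImage(frames[i], 0, 0);
    // county/warning overlays could be drawn on top here, or layered as SVG
  }, 250);
}
```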


I suspect a lot of early internet amateur animators got started on pirated/shared versions of Flash. Probably helped the proliferation of that scene a whole lot. Cloud based service models make that much harder.


$20.99 for all of creative suite is also a much lower entry point price.


Things may have changed for today’s teens, but having been a teen through the 00’s that would’ve been too expensive for me. It was hard to even justify a WoW subscription at $15/month even with the near endless entertainment value that provided. Also, with the CC subscription setup, if you stop paying you lose access to your creations. That’s not too big of a deal for working adults, but for a teen with tumultuous cash flow, it’s a huge dealbreaker.

I spent hundreds of hours in Photoshop as a teen, but I likely wouldn't have bothered with it at all if it weren't easily piratable.


12-14 year olds aren't going to be dropping 250 bucks a year. If you look at something like newgrounds, a lot of that was bored teenagers.


Depends on where you live in this world. For young people here (South America) that could be a lot.

That's a problem with cloud services generally. Pricing is mostly done considering some countries and contexts while totally ignoring others. With physical equivalents some years ago, local distributors made deals with their home offices to adjust for this, but that's not usually done anymore.


In 2007, Flash could run 3D games, do shaders, have multiplayer, run physics manipulation, have 3D sound, do bitmap manipulation and socket programming, and had documentation built into the editor.

Even now, HTML/JS can't do all of those things, and most of the things you can do are not as fast, while browsers are stuck with legacies to uphold. Flash had no DOM to worry about, no untyped language (AS3 was typed), and no CSS holding it back.

The general argument was that Flash sucks because people made terrible content with it. Which is like saying I hate having hands because I knock things over... so no limbs = no mess. PERFECT.

In turn, I think it helped push native apps, since plain JS/HTML apps just sucked in comparison when it came to experience and capabilities.

Flash should have been open-sourced. Hopefully with WebGL and WebAssembly someone can step in and create something similar.


I think the key thing to keep in mind is that Flash gave you all of that from one vendor in a coherent, easy to make use of experience. We can certainly do the same things with JS and the technologies we have outside Flash, but it takes a lot of mental work to stitch it all together. You have to know so many frameworks. You have to know the type of frameworks you're looking for to do `x`. And then there's a lot of performance tuning because something done through canvas isn't as fast as something done with the DOM, etc. Flash was gross but useful.


I agree. In the early 2000s, I learned about animation through Flash. I remember working on a 9th grade project that I illustrated through Flash. I remember learning what frames were, what keyframes were, what vectors were, etc. It was just the right amount of non-technical for me to make sense of it.

Taking a step back, man, what a different world it was back then. I'd fire up MS FrontPage or Macromedia Dreamweaver and go to town. The expectations have changed on the maintainability, usability, and functionality fronts, so I understand why we are where we are today. But I do miss those simple days.


Flash was much more than animation. ActionScript was a fantastic language for developing dynamic client GUIs in. Flex's component-based model was way better than what we have now: basically typed web components with a consistent runtime, and mxmlc and MXML were open source. ActionScript 3 was a fine language, with types and a nice compiler to work with.

Steve Jobs and browser security holes killed Flash, and no current open web platform covers all the use cases MXML and AS3 covered for cross-browser development. I could analyze audio channels, run lightweight process concurrency via green threading, store user files, do i18n translation, do streaming over sockets and work with actual binary data types in the browser in 2006. I could trigger actions based on events in video and audio streams. I had consistently applied CSS with animations across components in 2006. I had reusable web components in 2006. It's now 2018 and we still don't have cross-browser support for all of that. Oh, and I could run my app on the desktop in offline mode and in the browser.

Security was an issue. Looking back now, I think an Android-like permissions scheme is what it, and the browser, needs to fulfill the promise of write once run anywhere that the browser and the web tends to make.


>Actionscript 3 was a fine language

It was heavily based off ES5, which really helped launch my Javascript abilities forward at the time. I was sad to see it go and ended up working with other technology that never felt as fun.

I remembered recently that Haxe was initially based off of MTASC (something I used in the later AS3 days), and checked it out. It's quite a stable ecosystem that feels very familiar in syntax. Add in HaxeFlixel, and it's almost like Flash never left.


Your timing is off: ActionScript 3 was based on the doomed ES4, which had classes etc. Adobe was even a main participant in the standardization process, IIRC.


Yup, AS3 was based completely on the ECMA draft at the time and was a spec implementation. Macromedia, and later Adobe, had representatives on the Ecma standards committee, and due to politics, worry about compilation, the loss of "learning to code from reading source", and, from what I recall, concern for backward compatibility, the draft was killed. Harmony was the next effort, and it eventually evolved into ES6.


Completely agree with you... Going from Flash to browsers alone was like going 20 years backward. Even the features that are available don't perform at the same speed as Flash. A sad disaster.


i once (~15 years ago) worked at a web design shop where the founders were both architects. they did everything in flash as it somewhat resembled the tools they knew - cad software. one of them was the designer and he was really good and they made amazing websites without even knowing a shred of html or even programming languages (they didn't even use actionscript). in case the customer wanted a guestbook they used a free ad-supported one (until i joined as a programmer).

flash was a good tool for the websites they used to create, usually graphics and animation heavy websites low on interactivity. they used the export function to generate the swf including the html index page to embed it.

over the years focus shifted more and more to dynamic websites with content generated from databases and they were mostly lost there. dynamic content (loaded by http requests from databases) in flash usually turned into a huge pain in the ass after a while. for those projects we switched to a traditional website model where dynamic content mostly wasn't loaded into flash, instead it was a html-by-php website where flash animations replaced header jpegs (i.e. animated passive content).

so, in our case, flash was a good replacement for animated and slightly interactive but not dynamically generated content.


I had a lot of "oh wow" moments at the end of the 90's and beginning of the 00's with Flash.

A lot of users did, too. But it was usually along the lines of "Oh, wow. This page has Flash. Well, I guess I'll go get a Coke while the Flash plugin loads into the browser and my computer can't do anything else. If I'm lucky, the whole thing won't crash and take all my work with it by the time I get back."

We romanticize the past.


You know what was worse than that? Embedded RealPlayer.


I get enough "buffering..." spinners these days that RealPlayer jokes are in danger of being re-evaluated.


I never had those issues with Flash. Just low framerates and long loading spinners for the bigger animations.

Java applets OTOH had the exact experience you describe. Those were absolutely terrible.


Maybe they just wanted to play with splendid stuff like http://wordperhect.e-2.org


Macromedia Flash keyframe animation was great even for technical users who didn't know animation. However, as soon as you wanted any kind of interactivity you had to start learning a new scripting language and that was painful for everyone.


> Macromedia Flash keyframe animation was great even for technical users who didn't know animation. However, as soon as you wanted any kind of interactivity you had to start learning a new scripting language and that was painful for everyone.

Not really. ActionScript has always been close to JavaScript/JScript.NET and now TypeScript; it's the same syntax. In fact ActionScript 3 was supposed to be the template for ECMAScript 4, before it was abandoned.


Totally agree with this! I miss the simple days of build-once-run-anywhere, and of just being able to bash out a fun idea in an afternoon, release it, and know everyone would have the same experience. Yeah, security was crap with Flash, but surely that could have been solved. I think the battery use, and Apple's desire to make sure everything had to go through their paid App Store, are what really killed it, though.


I wish Adobe would make a Flash-like interface that outputs HTML5/JavaScript/CSS.


Their current Animate product (they renamed Flash Pro) is pretty much what you're looking for, if I'm not mistaken.


When Steve Jobs and company effectively killed "RIAs", I remember Adobe pivoting and promising software to do what they previously had, but outputting HTML5/CSS/JS instead of Flash. It looks like it is still alive in the form of Adobe Animate?


The product was originally called Adobe Edge, and it was rolled into Adobe Animate.


> The product was originally called Adobe Edge, and it was rolled into Adobe Animate.

No, Adobe Edge was killed dead; it was not rolled into Adobe Animate. Edge exported animations using jQuery and DOM elements, while Adobe Animate exports animations as pure canvas.


Have you tried Tumult Hype/Hype Pro (for Mac)? It’s not as fully-featured as Flash, but it’s similar in many ways and exports to HTML/CSS/JS.

https://tumult.com/hype/pro/


Well look at that. Be great if they changed their pricing model. A possible 2 weeks down the drain seems a bit staggering.


Kind of makes me wonder if there's a market for a React.js editor that works a lot like the Web Inspector does right now, where you can build your hierarchy as a nested list, set properties of each layer through a table, "wire" properties through several layers of hierarchy to reach further down components, and reference functions easily in the table. As long as it has support for lifecycle methods, it feels like it could become a natural UI for writing React apps!


The time is ripe for someone to recreate something like Flash that builds to WASM.


Adobe has Animate CC, which is basically Flash Pro with a new name that also outputs to JS, canvas, and WebGL. I think it's very probable it will support WASM in the future.


Yes; as I said in the post, I focused mostly on implementor's needs here. When I eventually do a user comparison, this is a huge pro for Flash.


Flash was indeed a tool for cultural creatives. You can tell because it only ever worked properly under Internet Explorer for Mac OS Classic. On every other browser/platform combo, there were framerate issues and the audio would gradually desync from the video. Forget about seeing these issues resolved in the afterthought of a Linux port.


Hehe yes I remember putting silent audio loops in Flash animations so the FPS wouldn't drift too much.


Well, you could run flash in webassembly and get all that back.

But it would still come with many of the drawbacks.


https://github.com/mozilla/shumway

is supposed to be an implementation of the Flash VM in TypeScript, but apparently it can't even run in the latest Firefox anymore, and there have been no commits for 2 years.


Any reason why we cannot build a WYSIWYG editor as a high level WebAssembly language?


Flash is an application. WebAssembly is a compilation target. Not the same thing.


> If you built an applet in one of these technologies, you didn’t really build a web application. You had a web page with a chunk cut out of it, and your applet worked within that frame. You lost all of the benefits of other web technologies; you lost HTML, you lost CSS, you lost the accessibility built into the web.

But that's also true of an application which relies on WebAssembly (or JavaScript): it loses all the benefits of the web, because in a very real sense it's no longer a web site, but is instead a program running in a web page.

WebAssembly or JavaScript: neither is document-oriented; neither is linkable; neither is cacheable. It's Flash all over again, except at least with Flash one could disable it and sites were still okay. With WebAssembly and JavaScript, every site uses them for everything, meaning we get to choose between allowing a site to execute code on our CPUs, or seeing naught but a 'This page requires JavaScript' notice.

It is the return of Flash, and that's a bad thing. We thought we'd won the war, but really we just won a battle.


I envision horrible "all WASM" websites, just like the old "all Flash" websites, that won't have accessibility, won't be able to be linked to, etc. Worse, I envision this as being another step in the ad blocker arms race. Inevitably there are going to be websites that package an entire WASM-based browser that will need to be used to access the site, nullifying client-side ad and script blockers. I can see the pitch now-- "Keep your existing website but add our tools to prevent ad blockers!"

(Edit: Typos. I should know better than to post from my phone by now. Grrr...)


This is a criticism that would be more suited to the Canvas API than the WASM API. WASM is still meant to drive the DOM API which is still as introspectable as before.

[EDIT]: Steve is right of course, and I misspoke here, "WASM is still able to drive the DOM" is closer to what I meant to say.


I agree with your first sentence, but not your second: wasm is meant to access all platform APIs, not just the DOM ones. Canvas is part of the platform as well.


I think we will start to see a lot of all-in-one frameworks that use wasm for constraint based layouts so people don't have to learn CSS. I hope I'm wrong but I can definitely imagine something like this coming from the enterprise java/.net types.


I sure hope we do. CSS has had 20 years and is still the most error-prone way of doing layout I've ever seen.


I don't think so. Accessibility, links, ad blocking etc. behave exactly the same with wasm as with JS.

What difference do you see?


Not if you don't target the DOM. If I could have a "browser" in a browser that targets canvas or WebGL, then I cannot block it, only at the network level.


You won't be able to block it on a network level either when it's running on a locked-down platform like iOS and using an eventual iteration of TLS that prevents man-in-the-middle inspection. This feels like yet one more step in the direction of "you don't own your computer anymore".


except iOS has some of the best ad-blocking available, with OS level extensions that are almost impossible to get around. So your point doesn't really make a ton of sense.


The OS-level tools won't be able to inspect the websockets-based channel that the browser-in-browser uses to communicate with its back-end. It will all be opaque TLS-encrypted traffic to the OS. The native browser will be hosting a canvas element that will host the UI and running WASM code and that'll be all the OS will see.


I was thinking more at the DNS level, but with DNS encryption even that falls on its face.


DNS encryption is done by the OS, which you control, so you could still null route ad servers.
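
For what it's worth, the simplest OS-level null route doesn't even need DNS: a hosts-file entry (the domains below are made up for illustration) short-circuits resolution before any resolver, encrypted or not, is consulted:

```
# /etc/hosts -- null-route hypothetical ad/tracker domains at the OS level
0.0.0.0    ads.example.com
0.0.0.0    tracker.example.net
```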


You can use canvas or webgl from JavaScript too, so there's no change here specific to wasm.


Except JS alone wasn't good enough for that; WASM, especially as a compilation target, seems to make the task of embedding a browser within a webpage easier.


You can, but WASM should be an order of magnitude faster than JavaScript, and this makes it possible to run all kinds of "heavy" apps in the browser. Some kind of client that outputs to WebGL should be doable in this case; in pure JS it would be too slow/expensive.


Documents are documents.

Apps are apps.

Sometimes both are in the browser.

It'd be great if all "documents" had an HTML version, with minimal JS. For accessibility, searching, deep linking, etc.


like Flipboard created? React Canvas rendered the site directly to canvas.

https://engineering.flipboard.com/2015/02/mobile-web


How the hell do you retain any accessibility when rendering a custom UI in Canvas?


You don't. They reinvented their own CSS and DOM.


There are sites where accessibility isn't much of a concern (esp things like games) or where it would be easily handled (eg. an image aggregator). For the rest, it seems inevitable that another (probably not-quite-compatible) accessibility layer gets built on top (for example, using the Qt accessibility model when compiling Qt into a canvas).

Overall though, I think wasm shouldn't be replacing HTML/JS.


It already is possible, with JavaScript. WASM doesn't change anything. And the fact that although it happens with JavaScript, it isn't pervasive, I think should assuage this fear.


I agree WASM doesn't bring in anything fundamental to this picture that isn't already there with JS. But that is no comfort.

In the ancient web world, the site author wrote HTML to describe the data she wanted presented and the browser took care of making it accessible. But authors (especially companies) wanted detailed control of how their sites looked, so they turned to flash etc.

JS has long been re-playing this trend in slow motion -- moving away from web pages being interactive documents presented by the GUI app called browser and towards them being stand-alone GUIs like in flash.


SEO is too big a concern nowadays for a resurgence of black-box websites, especially if they depend on large audiences and ad revenue.


But you can still get social media traffic. I think it's very possible that will help black-box web sites return.


I suspect some appeal of Electron apps is that the user can't block ads or scripts running in what's basically a website.


To be frank, I'm surprised Widevine hasn't been used in conjunction with DoubleClick/Google Ads to force websites to show adverts.

Sure, there's "fuckAdblock" but that shortly spawned "FuckFuckAdblock". It's a whole different case when the very browser prevents the content from being tampered with.


My position on this is basically, WebAssembly is no different than JavaScript here. If you think JavaScript ruins this property, well, the web was only in the form you describe for four years, and has existed this way for 23 years now.


The focus on driving WASM performance in the browser platforms, combined with the ability to transpile more languages to WASM, pushes the barrier-to-entry lower. Yes, these concerns aren't specific to WASM, but the platform is being made more capable of hosting this kind of troubling code, and more attractive to developers who would develop these things.


Compiling C++ or Rust to use in a web page is much more complicated than just writing Javascript. I can't see how that lowers the barrier to entry. Your argument seems to be that you can do more with those platforms because they're more performant, which yeah, to a point I guess, but Javascript is already plenty fast for making whatever obnoxious dreams people want to come true and the web seems to have survived it fine.


If you compile C++ or Rust to native targets anyway, I fail to see how it's more complicated.


I don't think the barrier to entry or ease of development is really the issue when we're talking about ad networks.


Ad networks, no-- I agree with that. I'm more concerned about entire websites becoming "apps", complete with browser-in-a-browser functionality (with the inner browser's behavior being completely under the control of the site operator).

It would be an interesting experiment to transpile a less complex browser, like Arachne, over to WASM as a proof of concept to demonstrate how awful this kind of future would be. (Yet another "if I had some free time" wishes... >sigh<)


> It would be an interesting experiment to transpile a less complex browser, like Arachne, over to WASM as a proof of concept to demonstrate how awful this kind of future would be. (Yet another "if I had some free time" wishes... >sigh<)

Don't. Most people will ignore the demonstration, but someone greedy will fork the project, build a library out of it, and start selling as a product to ad networks and media companies.


I agree. Somebody is going to open that Pandora's box, though. I'm glad to see that I'm not the only person who is concerned. I think it's an eventuality, however. Few young developers today have had to deal with walled gardens and don't understand how bad they are. Worse, today's platforms give an unprecedented amount of control to the platform owner to the detriment of the hardware's actual owner, and developers seem more than willing to help create those mechanisms of control. What's going to happen when nobody is left who actually owns their own computer?


Yup. That's what I'm worried about.

And people growing up with today web-first, mobile-first computing model have no clue of the power and capabilities computers have. With data being owned and hidden by apps/webapps, limited interoperability, nonexistent shortcuts, little to no means of automation of tasks, people won't even be able to conceive new ways to use their machines, because the tools for that aren't available.


You just gave me a horrible vision of a robotic hand perched over a smartphone screen being programmed to touch the screen to "automate" tasks because nobody will know any better. (Of course that would never work because our smartphones have front-facing cameras and software to detect faces and verify that we're alive... >sigh<)


Yeah, this is the input equivalent of the analog loophole :).

Now ordinarily, on PCs, you do that by means of simulated keypresses and mouseclicks, using scripting capabilities of the OS or end-user software like AutoHotkey. In the web/mobile-first, corporate-sandboxed reality, I can't imagine this capability being available, so Arduino and robot hand it is.

(But yeah, bastards will eventually put a front-facing depth-sensing camera, constantly verifying the user, arguing that it's for "security" reasons.)


Ad networks are the next Macromedia.


That is certainly not true. The knowledge base you need to even start compiling to WASM is far greater than just JS.


I think it's reasonable to assume that there will be many efforts to create tools and libraries that make it easier. It will become less difficult with each passing day.


The web long ago became not only a document store but also a thin client platform for distributing full client applications to end users. That cat is out of the bag and is not going to be stuffed back in.

WASM is really just a cleaner, faster, more elegant way of running languages other than JavaScript in the browser. It replaces transpilers that turned languages like Java or Go into ugly, basically machine-code JavaScript blobs. It will save bandwidth and improve performance but otherwise doesn't change much. Note that transpiled and uglified JavaScript is already "closed source," so nothing changes there. Anything can be obfuscated.


I do see your point!

I am however scared that HTML will go the way of Gopher. Why would anyone care to maintain boring hypertext documents when we can have app of the day. Marketing departments everywhere tend to turn the web into Blinkenlights.

How many support documents from 15-20 years ago can you still find using their old links? So many sites now work as dumb front-ends for a database.

Information retrieval and persistence over time is not something many worry about.

The cat is for sure out of the bag. I just hope that what was can still survive.


>I am however scared that HTML will go the way of Gopher. Why would anyone care to maintain boring hypertext documents when we can have app of the day.

JS or Wasm can't create documents by themselves; they still need the DOM. Even if it's a 2D canvas or a WebGL canvas, it's still a DOM element. Or even if it's just an iframe that loads some blob, at the top level it's still a DOM element. And as such it can be inspected and controlled.


> And as such it can be inspected and controlled

Not if the content is decrypted by EME that's not fully controlled by the browser.


I think marketing departments would quickly notice that most crawlers won't execute all the fancy Blinkenlights.

I would assume that it will take a while for tooling in any other language to get to JavaScript's level. I think WASM will mainly be used to support the latter: doing some excessive calculations... and yeah, excessive Blinkenlights.


You'd be surprised; marketing departments generally don't have a clue about that specific type of thing. Hell, eBay's operations apparently doesn't, from my experience. It's incredibly easy to game marketing, and internet marketing is mindlessly easy even without the invasive stalking.


> The cat is for sure out of the bag. I just hope what was still can survive.

I hope so too, but as a member of predatory and territorial species, the cat will most likely keep on killing everything else around it.


Yes, but only for about four hours a day, because naps.


Exactly. Well said.


WebAssembly is linkable, in the dynamic-linking sense: https://webassembly.org/docs/dynamic-linking/

WebAssembly enables load-time and run-time (dlopen) dynamic linking in the MVP by having multiple instantiated modules share functions, linear memories, tables and constants using module imports and exports. In particular, since all (non-local) state that a module can access can be imported and exported and thus shared between separate modules’ instances, toolchains have the building blocks to implement dynamic loaders.

The code is fetched via URLs so you can link to it in that sense, too.

It's also cacheable: https://developer.mozilla.org/en-US/docs/WebAssembly/Caching...
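
To make those "building blocks" concrete, here is a minimal sketch of run-time linking with hand-assembled module binaries (no toolchain involved; the layout and the import name `env.add` are my own choices for illustration). Module B imports the `add` function that module A exports:

```javascript
// Module A: exports add(i32, i32) -> i32 (hand-assembled minimal binary)
const modA = new Uint8Array([
  0x00,0x61,0x73,0x6d, 0x01,0x00,0x00,0x00,              // magic + version
  0x01,0x07,0x01,0x60,0x02,0x7f,0x7f,0x01,0x7f,          // type 0: (i32,i32)->i32
  0x03,0x02,0x01,0x00,                                   // func 0 uses type 0
  0x07,0x07,0x01,0x03,0x61,0x64,0x64,0x00,0x00,          // export "add" = func 0
  0x0a,0x09,0x01,0x07,0x00,0x20,0x00,0x20,0x01,0x6a,0x0b // body: local.get 0/1, i32.add
]);

// Module B: imports env.add, exports add5(x) = add(x, 5)
const modB = new Uint8Array([
  0x00,0x61,0x73,0x6d, 0x01,0x00,0x00,0x00,
  0x01,0x0c,0x02, 0x60,0x02,0x7f,0x7f,0x01,0x7f, 0x60,0x01,0x7f,0x01,0x7f, // 2 types
  0x02,0x0b,0x01, 0x03,0x65,0x6e,0x76, 0x03,0x61,0x64,0x64, 0x00,0x00,    // import env.add
  0x03,0x02,0x01,0x01,                                   // func uses type 1: (i32)->i32
  0x07,0x08,0x01,0x04,0x61,0x64,0x64,0x35,0x00,0x01,     // export "add5" = func 1
  0x0a,0x0a,0x01,0x08,0x00,0x20,0x00,0x41,0x05,0x10,0x00,0x0b // x, 5, call imported add
]);

const a = new WebAssembly.Instance(new WebAssembly.Module(modA));
const b = new WebAssembly.Instance(new WebAssembly.Module(modB), {
  env: { add: a.exports.add }   // link B against A's export at instantiation time
});
console.log(b.exports.add5(10)); // 15
```

The same mechanism scales up: a toolchain emits many modules, and the embedder (JS here) wires one module's exports into the next module's imports.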


I believe the parent comment was referring to hyperlinks, not dynamic linking.

The point was more that once webpages become applications running on the client (think single page apps), the natural document metaphor of web pages and the tooling built on it (hyperlinks, forward/back, bookmarks, history) falls apart unless you do extra work to ensure that experience is maintained.


the natural document metaphor of web pages and the tooling built on it (hyperlinks, forward/back, bookmarks, history) falls apart unless you do extra work to ensure that experience is maintained

But not everything needs to be a document. Sometimes the thing you're working with really is an application and not a document.

To me, one of the biggest problems with the current web is that we've commingled "app stuff" and "document stuff" so badly that browsers have been forced to become a shitty, inferior X-server (or Operating System outright), instead of being really good browsers. Browsers for browsing is great... browsers as a UI remoting protocol, is a bit janky.


ah. thanks.

"clickable"

Because you certainly can link to the wasm and js code that come with webassembly instantiateables.


I think the GP means "links" as in "clickable links", not "binary linker/loader".


I think he meant linkable in the web sense, ie, hyperlinks.


Sometimes a program running in the browser will be valuable when it's full window.

Would I complain if I could run a full version of Word or Excel in the browser? The browser would become a universal interface in another way and decrease our reliance on particular operating systems.


> Would I complain if I could run a full version of Word or Excel in the browser? The browser would become a universal interface in another way and decrease our reliance on particular operating systems.

I for one would, because the browser is an absolutely shitty interface. You're still forced into the "there are tabs, which contain sandboxed documents" model of use. Interoperability is nonexistent, integration with the machine's capabilities is superficial and completely opaque to the user, the data model is hidden (where is my localStorage equivalent of the file browser again?), and everything assumes you're constantly connected. It's a corporate wet dream, but for individuals, it's a nightmare.


Nothing's perfect, though. If operating systems aren't, I wouldn't expect browsers to be either.

Creating mobile and/or offline-first experiences for individuals isn't a pipe dream; it was possible and happened in the 90's, when connectivity (dialup) informed content (largely offline or downloaded).

I'm not looking at replacement, only reasonable substitutes, which I think will become useful similar to using Google docs on mobile and web.


> where is my localStorage equivalent of the file browser again?

The Firefox developer tools have a "storage" tab that lets you inspect the content of various databases associated with a website.


Default-disabled, read-only and scope-limited to domains your current tab works with, but I guess it's better than nothing.


In my experience, the application-on-browser products consume far more CPU and RAM than the application-on-OS products. For me, that's a pretty big deal: I need the laptop to run as long as possible on a charge. Right now, I would complain if I _had_ to run a full version of Word or Excel in the browser.

Perhaps Web Assembly will drive this power usage down. But as it stands now, I actively avoid more than one of these app-on-browser products at a time.


Well, modern JS is MORE performant than classical scripting languages in benchmark cases, but the fact is that your browser freezes on half of the JS CRUD apps that do data processing, while an analogous Perl application works at near light speed in comparison.

In half of such cases, stuff like sorting, list comparison and deduplication is done in a way that would score a low mark even by the standards of a first-year university program.
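
(A hypothetical illustration of what I mean, using deduplication: the accidentally quadratic version versus the linear one-liner. Function names are mine.)

```javascript
// The kind of accidentally-quadratic dedup that freezes a tab on big lists:
function dedupSlow(xs) {
  const out = [];
  for (const x of xs) {
    if (!out.includes(x)) out.push(x); // O(n) scan per element -> O(n^2) total
  }
  return out;
}

// The linear version, one line with a Set:
function dedupFast(xs) {
  return [...new Set(xs)];
}

console.log(dedupFast([1, 2, 2, 3, 1])); // [1, 2, 3]
```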

This is telling of web development industry's approach to doing business.

The most horrid examples of "LAMP sweatshops" of 10 years ago pale in comparison to what the industry has devolved into these days.

My own experience being an involuntary webdev for 3 years left me with following impressions:

1. Webdev is the largest commercial development niche in the whole tech industry. Everything else pales in comparison. It is also about making money quickly. A webapp or even a promo-page SPA for a major consumer brand these days can easily cost up to $100k. $100k does not seem like a lot to most people here, but such money can readily be offered for a 1-month project for a team of 6-8 professionals.

2. The industry is dominated by shops with a headcount of 20 to 30 people. Web dev studios generally don't scale much above that because of talent flight. The loss of a single senior dev who supervises hordes of lowest-tier mule coders is often the end of the business for most of these companies.

3. People from the "big dotcom" world are nearly oblivious to the ways of small web dev shops. For people who began their careers with $60k-a-year internships, getting into the shoes of a person who codes for $30k a year is impossible.

4. Talent flight and turnover is real.

5. This is all about really expensive quick and dirty code.

6. The "big dotcom" type of companies tried time and time again to tap into this market to extract rents, and with the exception of Macromedia nobody ever succeeded. This is the reason Adobe is lobbying for unusable, unwieldy APIs in hopes of selling tooling for them.


If I could ask, where do you live? Your experiences don't reflect my own.


I practiced for nearly 3 years in Canada, and continued for half a year after that in China.

Quit webdev a year ago, now working in engineering consultancy.


I'd hope compiled binaries can run more efficiently than dynamically compiled Javascript over time.

Right now my mobile device's battery is often tapped by JavaScript that insists on running in the background.


> decrease our reliance on particular operating systems

By replacing it with a poor simulacrum of an operating system. Browser APIs are an inefficient subset of what libc and BSD sockets offer.

And they provide near-zero interoperability with native applications. No filesystem access (beyond the clunky save-one-file dialog), no CLI, no IPC, nothing. That means browsers are building on top of operating systems while not interoperating with them.


> No filesystem access

This is a step forward not backwards. The security model of allowing apps access to your full filesystem (assuming your user has access) is flawed. It leads to apps storing data in funny places, reading files they shouldn't, and general mayhem. Requiring the user to explicitly allow the app to access the file is a good thing.

There are some use cases that are hard to support (like being able to open all the files in a folder). But people are working on a solution.[1]

> No IPC

WebRTC, while not the same thing and with far more overhead (due to TCP sockets vs OS-level sockets), can function very much like IPC. And there is nothing stopping a process running in a different browser (or no browser at all) from connecting to a webapp locally using WebRTC.

Additionally, if a new window is opened by JavaScript and both pages are on the same domain + port (or on subdomains of the same domain and you have access to the parent domain), you can communicate between the windows with simple JavaScript function calls. And since browsers are moving towards a one-process-per-window setup, this is essentially IPC.

> That means browsers are building on top of operating systems while not interoperating with them.

While I can't argue with that. So is X Window. The abstraction between app and OS is a thick gray line not a thin black one.

[1] https://developer.mozilla.org/en-US/docs/Web/API/FileSystemD...


> This is a step forward not backwards. The security model of allowing apps access to your full filesystem (assuming your user has access) is flawed. It leads to apps storing data in funny places, reading files they shouldn't, and general mayhem. Requiring the user to explicitly allow the app to access the file is a good thing.

Most apps being limited to their little part of the filesystem is not a problem. The problem is, now as a user, I can't access those files. I can't view them in a form that suits me, I can't use other applications to operate on them. The true form of the data is forever hidden from me, a secret of the application that "owns" it.


IMO that's a fairly easily solved problem. Browsers can add "localstorage browsers", you might even be able to do it in a browser extension.

I'd also love it if they gave that ability.


But that's the wrong direction. Instead it should map to a file tree that you can explore with your native file explorer and text editor. The browser becomes a silo for your data, inaccessible by every other application.


> The true form of the data is forever hidden from me, a secret of the application that "owns" it.

But that's been true for almost all users, and not just webapp users, forever.


Not necessarily. In the world of desktop software, most users know what a file is, and know that all the data of what they've been working at the moment on is contained within such a file. They know they can move this file around and possibly send to whomever they want. They also know that a file can be opened by multiple applications.

SaaS and web kill that.


I was thinking of all of the files in proprietary, and particularly binary, formats. Maybe some users know that even those files can be opened by multiple applications, when that's even true, but I suspect even more users don't even realize that almost all of their data is stored in a file somewhere, let alone where that file is in the filesystem and in what format it's stored.

Given the ubiquity of Word document and Powerpoint presentation files and the like, most users I'll grant you are aware of the files themselves, and the fact that they can be attached to an email. I'll even grant that a large fraction of those same users could answer 'yes' to the question 'Could these files be opened by another application?'. But almost none would be capable of doing anything with those files without an application that handles everything for them.

I don't dispute tho that an awareness of, let alone existence of, files in a filesystem is a significant benefit and not having access to them is a (relatively) significant loss.


> The security model of allowing apps access to your full filesystem (assuming your user has access) is flawed.

You're neglecting the option of exposing a limited subview of the filesystem, like containers do.

> But people are working on a solution.[1]

The big red box on top says it's not on standards-track.

> WebRTC while not the same and far more overhead (due to TCP sockets vs OS level sockets) can function very much like IPC.

Can I send open file descriptors like I can with unix domain sockets? Can I share memory for low-latency atomics? Futexes?

> So is X Window.

Maybe if you're remoting X, but few people do that these days. In practice, X applications have access to the same machine they are drawing on.


> You're neglecting the option of exposing a limited subview of the filesystem, like containers do.

No I'm not. I said the limitation is a step forward. I didn't intend to imply it is perfect. It is not at all perfect.

> The big red box on top says it's not on standards-track.

Correct, but most standards started as experiments by the browsers. I think it qualifies as "people are working on it", but it also means it is probably far from being standardized.

> Can I send open file descriptors like I can with unix domain sockets? Can I share memory for low-latency atomics? Futexes?

No. But you already knew that. It does allow for data communication, which in my opinion solves the 80% use case for IPC. In my experience (YMMV), the features you described, while useful, are not needed for most consumer apps.

Don't let perfect be the enemy of good.


> Don't let perfect be the enemy of good.

The problem isn't perfectionism, but that at least some of us believe that things are moving in the wrong direction - towards making vendors own everything, and end-users in control of nothing.


I wasn't implying replacing operating systems, but rather having the ability to substitute them, similar to how web apps can substitute for native apps.

I'm still optimistic that new forms of applications will emerge from this. There are serious pieces needing fleshing out, like file access.

The insecure interoperation between browsers and operating systems can perhaps be reimplemented through a newer, more secure interface like wasm or its APIs.


Yeah, or a full version of a monero miner...


Different pseudo-VMs, I mean browsers, operate differently even on the same specs for various technologies (CSS, JS). They already act effectively like "particular operating systems," except they're less efficient and more obnoxious to work with.


[flagged]


This comment breaks a handful of guidelines and is not civil or substantive.

https://news.ycombinator.com/newsguidelines.html


Potentially fun questions: are there any “DOM-native JavaScript games”? I.e., games that manipulate the DOM for their “graphics”—or even have hypertext in place of graphics—rather than running in a canvas?

The only example I can think of is the Twine engine for Interactive Fiction.


Well there's this.

https://github.com/mozilla/BrowserQuest

Doesn't work in Safari.


You should look into Crafty, it’s a js game engine which can output to either the DOM or canvas, I’m not sure how popular it is anymore but quite a few games used it. There are demo games here http://craftyjs.com


Compiling to Wasm will only get easier. It's only hard now because the target is new and people are still adapting the tooling. There is no reason why it would be any harder than compiling for a machine.

Wasm will almost certainly lead to UI frameworks for the Web. JS people try very hard to get similar stuff, but the language is just not good enough; at the same time, the desktop people who have this stuff are clamoring for some way to use it on the Web. People are already working on those frameworks, by the way.


Yes, it's bad for document markup, but I wouldn't waste time coding the next Excel in HTML and CSS; I'd go straight to a GUI language with guaranteed cross-platform rendering.


The OP is really spot on.

For the commenters that seem to have some underlying fear that WASM apps will be another incarnation of a "window in a window" or some horrible bitmapped graphics pane that does not fit into the web model:

WASM is just a CPU. It's a bytecode format for expressing low-level, high-performance programs. It comes "batteries not included"--intentionally. By batteries, I mean APIs. WebAssembly modules must import everything they need from the outside. When embedded in JS and the Web, the first and still primary use case of WASM, that means modules can import functionality from both JS and the Web, and call literally anything that JS can call. That means WASM can (though still somewhat clunkily) manipulate the DOM, WebGL, audio, service events, etc, through all of the same APIs that JS can do. There is nothing that prevents a WASM app from looking and feeling exactly like something written in JS.

To reiterate: WASM does not require you to drop down to canvas or render fonts yourself. You can call out to JS or direct to WebAPIs! (again, it just happens to be clunky to do this from C++.) But other languages are working on bindings that make this much nicer. Rust anyone? :)

What WASM gives the web is a proper layer for expressing computation. The APIs and paradigms that build on top of WASM are independent, swappable, interposable, by design. Because it's a layer for computation, and a low-level one, it is by nature language-independent. As Steve mentioned, adding languages to the web one by one does not scale. Thus WASM.
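To make the "batteries not included" import model concrete, here is a toy module with its bytes written out by hand (so treat it as a sketch, not production code): it imports a single function from the host and calls it. In a real page that import could just as well be a DOM or Web API call supplied from JS.

```javascript
// A hand-assembled WebAssembly module: it imports env.log(i32) from the
// host and exports run(), which calls log(42). Everything the module can
// do with the outside world arrives through imports like this one.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,             // magic + version
  0x01, 0x08, 0x02, 0x60, 0x01, 0x7f, 0x00, 0x60, 0x00, 0x00, // types: (i32)->(), ()->()
  0x02, 0x0b, 0x01, 0x03, 0x65, 0x6e, 0x76,                   // import from "env":
  0x03, 0x6c, 0x6f, 0x67, 0x00, 0x00,                         //   "log" (func, type 0)
  0x03, 0x02, 0x01, 0x01,                                     // one local func, type 1
  0x07, 0x07, 0x01, 0x03, 0x72, 0x75, 0x6e, 0x00, 0x01,       // export it as "run"
  0x0a, 0x08, 0x01, 0x06, 0x00, 0x41, 0x2a, 0x10, 0x00, 0x0b, // body: i32.const 42; call 0
]);

let received = null;
const imports = { env: { log: (x) => { received = x; } } };

// Synchronous instantiation is fine for a module this tiny.
const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes), imports);
instance.exports.run();
console.log(received); // 42: the module called back into JS
```

Swap the `log` import for a function that touches the DOM and you have WASM "manipulating the DOM" in exactly the sense described above.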


> To reiterate: WASM does not require you to drop down to canvas or render fonts yourself.

The fear isn't that it requires that. The fear is that it enables that.

The web is, and has been over the past two decades, in the constant state of war over control between publishers and consumers. People - and especially businesses - making pages would like to have 100% control over how the webpage/app looks and is being used. But the users would like to have some control over what they're viewing too[0].

The most widely known battle in this war is the battle for ad-blocking. The publishers want you to view lots of ads. You want just the content, without any of the ads. So far, the technology (and economics) favors the user, but it's not a given.

The balance of control on the web was always maintained by the technologies the web standardized on. Pure HTML, or even HTML+CSS, strongly favours the user. JavaScript tilts the balance significantly towards the publishers, as now they can (and do) generate content with code, which renders the page difficult to interpret and modify on the user's end. One of the biggest complaints about Flash was how shitty the pure-Flash/mostly-Flash webpages were. That's not an intrinsic problem of Flash - this happened because Flash gave the publishers too much control. And publishers (again, especially businesses) will use (and abuse) any control they're given.

The fear here is that WASM again tilts the control in favour of publishers, which will lead to abuse, and to the web becoming a much worse place for consumers. If WASM, by virtue of efficiency, enables publishers to embed a browser they control within the page, the publishers will use this, because it would single-handedly eliminate most ad-blocking, userscripting and scraping efforts.

--

[0] - and the power users, like myself, would like to have 100% of that control - think of how much better the web would be if the data was always published in machine-readable format, without tons of bullshit paginations and stylistic choices to scroll and click through. For instance, when looking for current weather, I want to input my location and a time span, and get weather data. I want to be able to script that. I don't want to waste time looking at ads, pretty pictures, non-relevant text and links.


It's a big trade-off to be sure. On the one hand, I'm worried about the web becoming more closed-source and less hackable for all the reasons you've mentioned.

But on the other hand, I can't help but see the enormous potential of a proper assembly language for the web. Web technologies have felt like a massive hack for decades: tools designed for basic text formatting and a bit of interactivity have been stretched in extreme ways to meet the needs of the modern web. Web applications are the most widely used software on the planet, and if you ask me it's about time developers had the freedom to build them in the language which makes the most sense for the task at hand rather than the only one available. And I am quite keen to see what kinds of new things will be possible when the ceiling is significantly raised for performance optimization.


Yeah, I feel the same sentiment you described, too. When building a web application, I'd prefer to use more powerful tools than JavaScript, and maybe a sane(r) set of libraries for user interface. There's also value in cross-compiling applications and games to web platform, because of ease of end-user deployment - for instance, games playable without explicit installation (of the game, runtime, and support libraries).

So I have really mixed feelings here. On the one hand, I appreciate the power WASM gives. On the other hand, I don't trust the majority of companies on the web to use that power responsibly.


> For instance, when looking for current weather, I want to input my location and a time span, and get weather data. I want to be able to script that. I don't want to waste time looking at ads, pretty pictures, non-relevant text and links.

I feel the same way. But those ads are there because that's the entire business model of people putting weather data out there for free. On most free sites, ads aren't just a sideshow, they're the driving engine. Take away the ads, and there goes the business model.

What we need is some other way to pay for the weather data. Maybe this could be a service provided by your ISP, like NTP or DNS. Or some third party subscription service. Or maybe even taxpayer funded. But if you're using a service that relies on ads as their revenue model, then expect to put up with ads. They're part of the deal.


That's a fair point, and a prelude to a much larger discussion about business models on the web. Suffice it to say, I'd happily accept some deal for compensating the data provider - be it ads, micropayments, or even regular subscriptions - if the resulting data was available in a) machine-readable form, and b) decluttered form on the webpage, so that I could read it efficiently (possibly with support of userscripts/userstyles). As a bonus, such sensible data display will save the provider's bandwidth costs, as on the typical site, 90+% of transferred bytes are not part of the content.


The web used to exist without ads, and it was more functional and useable for users. This idea that we need ads to fund webpages has always been nonsense. They are not at all part of the deal.


Micropayments is another possible solution that comes to mind. Say, pay one tenth of a penny every time you want to look at weather data. Assuming that we can come up with something that works for insignificant amounts and is fast, cheap, and secure to transact. If that is the case, you would just have to send a confirmation token with your first HTTP GET request to access a website with no ads. Competition would hopefully drive prices down and quality up.


They won't abuse it, because of GDPR. And people are already smarter; the same tricks from the past won't repeat. Browser vendors will be able to easily block too-heavy WASM programs, for example ones which run too long. Or new laws will enforce that. [edit]: Or even very heavy WASM apps will require signing by a certificate provider, otherwise the user will be warned about the risks. Just like HTTP vs HTTPS.


Javascript enables that too and some people do it.


I'm typically sharply critical of the web, but I think this comparison is kind of silly. The biggest problem with Java Applets and Flash is the security issues, which were largely caused by giving web pages access to a second, less secure sandbox. WASM stays in the same sandbox as the rest of the web. Flash also had the problem of being proprietary and non-standardized with only one implementation, something WASM does not suffer from.

For those worried about "all WASM" pages looking like the old "all Flash" pages of yore, consider that Flash and Java applets had their own UI stack and WASM does not. The closest WASM has to that is OpenGL, but you've been able to make all-OpenGL apps with pure JavaScript for some time, and it hasn't taken over the web with terrible sites yet. WASM code can interact with the DOM. I guess we could worry about native C/C++ GUI toolkits being ported to WASM, but then the web community gets what it deserves for making Electron a thing.

I don't like JavaScript in general but I don't see how WASM is any worse, and if anything it's quite a bit better.


Flash Player had an open-access spec, and there was more than just the Adobe Flash Player as an implementation.

Pretty much every AAA video game of a certain period used Scaleform's Flash player for its user interface.


> Flash player had a open access spec and there was more than just the Adobe Flash Player as implementations.

No: the compiler, maybe, ActionScript, maybe, but not the player. The player is entirely closed source and there is no open spec for the player. Or you need to show it to me.

> there was more than just the Adobe Flash Player as implementations.

Only Adobe's implementation could run all swf files. Scaleform was not an alternative flash player. Any attempt at creating an alternative and feature complete flash player failed.

Flash the tech is not open, at all.


The player is closed source but the SWF spec is open.

https://www.adobe.com/devnet/swf.html


Doom 3 BFG comes to my mind.


Why on earth is this being down voted?


WASM has all the security problems of flash, and then it multiplies them, by making WASM content linkable.

So someone makes a game, and they use this very useful WASM library over here. Only that library exploits spectre or meltdown to steal data. Or maybe it just silently hoses your machine by targeting the new WebGL shaders? Or any myriad number of other things.


Exploiting browser bugs is still just exploiting browser bugs and this is already a problem for JavaScript, WASM doesn't make it worse. Flash introduces a second, black box sandbox implemented by morons.


I don't think you're understanding.

Let me be explicit. There are changes in WASM specifically made that render Spectre and Meltdown mitigations useless. (ie-Browser makers put in spectre and meltdown mitigations, and changes in WASM allow WASM content to get around those mitigations.) Developers cheer the changes, because they make WASM more useful, and to be fair, browser mitigations of Spectre- and Meltdown-type bugs make WASM far less performant. But changes which render those mitigations useless are dangerous no matter what your opinion is on how useful WASM should be.

Edit: Should probably mention that the upcoming changes include threading and shared memory. Implemented in a way that enables CPU side channel attacks. (Probably because there is no other way to get threading and shared memory without everything slowing to a crawl, but still.)


> (ie-Browser makers put in spectre and meltdown mitigations, and changes in WASM allow WASM content to get around those mitigations.)

Could you be more specific? I implemented Chrome's Spectre mitigations for WASM and I'm not sure what you are referring to.

> Should probably mention that the upcoming changes include threading and shared memory.

These only give you a high-resolution timer mechanism, which you have to build yourself and which was already possible in JS with SharedArrayBuffer. So WASM is no worse in this respect.


It's probably a good thing that a standard isn't designed around Vulnerability of the Day, no?

Some time in the (near?) future those vulnerabilities will just be a footnote in some history book and having to support mitigations forever (due to backwards compatibility) probably isn't the best thing to encode into a standard.

I'm sure some intrepid security researcher will find some new Vulnerability of the Day which can also be exploited through wasm and then they will need to add mitigation to the standard yet again ad nauseam until it becomes some giant bloated unusable mess for which we'll need yet another standard.


Maybe I don't understand Spectre and Meltdown, then. I wasn't aware it was the browser's business to patch that, I thought it was kernel and microcode patches?


Kernel and microcode patches allow the OS to control for meltdown a bit better.

The problem with Spectre is that it is a bit different. The array-bounds class might be patched in the long run, but even from the beginning, people suspected that the other classes of Spectre would be more slippery. And true to form, new Spectre-type variants continue to be discovered and disclosed by Intel even to this day (SpectreRSB, for instance).

In short, no simple patch will fix all the strains of these bugs out there. In light of that, browser makers have implemented mitigations at the application level which are a bit more heavy-handed. But, as you can imagine, this impacts all content inside the browser. Which brings us to WASM content and the threading and shared-memory changes, and you know the rest of the story from there.


WASM itself isn't Java, Flash, or Silverlight, but isn't it another step in the ongoing multiyear process of replicating what those technologies tried to accomplish: compile to one format and run it on multiple platforms?

I think so, and the managers at Adobe and Sun must be kicking themselves for not somehow getting their runtime more open, modular, and standardized now that we see write once run anywhere with a few system hooks is all we need.

Then again... It was a different world in the mid 2000s. The web standardization process? Ha, what was that?

On a side note, I'm seeing more articles pointing out that WASM runs in the JS VM. Doesn't that negate the whole speed advantage of WASM?


I think it's a step forward in that it's more integrated into the platform. Remember when TCP/IP used to be an add-on for an operating system?

> managers at Adobe and Sun must be kicking themselves

Both tried. As I recall Sun were blocked by Microsoft, and Flash was bundled as standard with Netscape from about 2001 onwards. Steve Jobs killed that stone-dead when he point-blank refused to support it on iDevices.


Some companies have all the right ideas and for whatever reason still can't execute.

Adobe AIR beat things like Electron and PhoneGap to market by years. IMHO the issue with Adobe is this insistence on 'open' still having various very opinionated elements. Adobe AIR, for example, had a lot of good ideas but still attempted to evangelize Flash and ActionScript. I _think_ MS is trying to pivot off that grave now with .NET Core. Time will tell if the Mono-to-Wasm or .NET Core Native projects have legs.

I was so very excited about Adobe Air and wrote a production application with it in 2009.

I _think_ a sweet spot for WASM is data processing. The data visualization space should explode once I can work with data in the browser at near-native speed.

https://en.wikipedia.org/wiki/Adobe_AIR


To be fair, AIR was not the first in the domain. Mozilla had XUL/XULRunner ~15 years ago, which could be used to quickly develop kick-ass cross-platform applications in JS (and is still, by and large, the base of Thunderbird and Firefox).

Sun thought that they had something like that with Java Apps ~20 years ago, except they forgot to make installation and UX compelling, and the memory requirements were unacceptable for the time.


I remember trying to do stuff with Adobe AIR and it just felt like a colossal waste of time. As soon as you tried to do anything that interacted outside of their sandbox you were severely limited. I remember some guys did a hack called Cairngorm that I looked at, but it seemed quite cumbersome. Then there was support: I think it was only after a few years that they just gave up and spun it off to Apache ... you need to stick at it longer than that to establish yourself ...


Remember when Adobe AIR was going to come to Android? That would have been an amazing write once run anywhere experience.


You can build Android apps (and iOS too!) with Adobe AIR today. In fact, it's been possible since about 2010.


It did come to Android.

For a while before being terminated, Flash got a native code backend.


If the Mac version hadn’t been a crashy dumpster fire, Steve Jobs might not have done that. Flash on Macs was always mediocre.


By blocking flash apps he increased the motivation for migration to native apps, so there was a sound commercial basis for this as well.


Macs have always been throttled frying pans. They sacrifice much performance for the sake of thinness and design. No wonder Flash always performed badly on mac devices.


Nope. The current "form over function" mentality is definitely a post-Jobs and post-Flash thing.


Not always. They were great little machines 10 or so years ago.


You did see that the recent MBP throttling issue was a software bug and has been fixed, yes?

Admittedly they could be clocked slightly higher if they were larger with better cooling, but they're by no means slow computers.


Flash on mobile was a two-sided story; Adobe had a lot of difficulty implementing multi-touch correctly.


> On a side note, I'm seeing more articles pointing out that WASM runs in the JS VM. Doesn't negate the whole advantage of speed for WASM?

It basically means that WASM has the same safety/security model as the JS VM. Just like JS, it is compiled to native code (I'm simplifying a bit, of course) before being executed. However, where JS is one of the languages with the most complicated semantics around, which makes it really, really hard to compile efficiently, WASM has extremely simple semantics and is designed to be really, really easy to compile efficiently.
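To make the "complicated semantics" point concrete: even the simplest JS operation is polymorphic, so a JIT must speculate on types and keep deoptimization paths around, whereas a Wasm `i32.add` is always a single typed machine add.

```javascript
// One source-level "+", three different runtime behaviors. A JIT that
// compiled add() assuming numbers must bail out and recompile the
// moment a string or object shows up.
function add(a, b) { return a + b; }

console.log(add(1, 2));    // 3: numeric addition
console.log(add('1', 2));  // "12": string concatenation
console.log(add(1, {}));   // "1[object Object]": coercion via toString
```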


Thanks for the differentiation. I guess when I read that WASM will be native code, I expected it to be native as in C or C++, not native to a VM.


WASM and JS are both JIT compiled in all major browsers which means that they compile to the same kind of native code that C and C++ do, they just do so as the program is running rather than in advance.


What's even better about WASM than JS in this case is that it can be compiled as it is being loaded. With JS, the entire file needs to be downloaded before being executed, but that's not the case for WASM, resulting in even more performance improvements.

https://hacks.mozilla.org/2018/01/making-webassembly-even-fa...


> Doesn't negate the whole advantage of speed for WASM?

Well, here's a benchmark of asm.js JavaScript versus WebAssembly in a real world application:

https://pspdfkit.com/blog/2018/a-real-world-webassembly-benc...

The WebAssembly version outperforms the asm.js version.


If I'm not mistaken, in those plots lower is better. Only WASM on FF has a clear lead?


No: wasm, like asm.js, is designed to be compiled into native code once validated. Unlike asm.js, it doesn't also require a long parsing step. It uses the same code paths used to emit native code from the JS VM's JIT.


It’s bootstrapping. You’re building a new thing (example: C++) that works a lot like the old thing (C). So you build a wrapper (Charm) that works on top of the old thing so you can get the conversation going, expand your capabilities and recruit.

Over time you do more of your own thing and you or someone else splits these two pieces of code into three smaller ones. Like the LLVM backend that can be fed by a C or C++ frontend.

As webasm becomes a competitive advantage you should expect to see people split up their javascript VM into three pieces, and Javascript and Webassembly running as peers instead of guest and host.

In a very small way, we kind of saw a similar thing with JSON. JSON was just a strict subset of Javascript and you could emulate it on old browsers with a linter in front of an eval(). Now it’s its own thing.


> I'm seeing more articles pointing out that WASM runs in the JS VM.

It runs in the JS sandbox, but it cannot be efficiently emulated by the JS CPU. "VM" is an ambiguous term.

Browser developers are talking about running JS in the Wasm VM. That will probably be reasonable very soon.
