Flash was more like using Premiere, where you just edited your timeline with a bit of interactivity sprinkled over it; no movie editor ever had to get their hands dirty with some kind of scripting language or low-level file formats just to edit a movie.
I had a lot of "oh wow" moments at the end of the '90s and beginning of the '00s with Flash. It was like the web had been warped into the future. Nowadays you can achieve the same, but not as elegantly. It kind of reminds me of how PCs had to catch up with the Amiga for years. Perhaps starting with Wing Commander things were really on par or better, almost 8 years later.
Flash was like using poor quality native apps, which was a step backwards from a browser.
Link me to the most user-hostile Flash-based web page you can find, and I'll watch a ten year old hour-long video presentation from a Gnash developer explaining their 100-year plan to ship a usable open Flash runtime alternative.
Flash didn't have any of that, and was much slower to load than modern JS websites.
And this is not some student's first approach to modern web technologies, it's fucking YouTube from Google. They can't get their shit working in their own browser. I have yet to see an SPA with a convincing UX that's more than just some wannabe webdev's "about" page.
It's sad that Google is revered by the developer community as a role model for software engineering, because their frontend web work has—at least for the last ~10+ years—always been terrible and a really great example of worst practices.
Gmail's early HTML version is possibly the last faded memory of a quality frontend product coming from Google. Everything they've created since the advent of GWT has been aggressively anti-user and anti-interop. Gmail took a long time to get full browser support and longer still to play well with back buttons, etc.—meanwhile the Gmail interface has slowed and bloated with each iteration, the latest bordering on unusability on my very new midrange laptop. Wave never worked in anything but Chrome, and the same is true of the early iterations of most of their large newly released products over the years. The Google homepage provides a totally inconsistent experience across browsers—on mobile I see three different results views in three different browsers, two of them Blink-based! Why doesn't search by image work on mobile? We're thankfully no longer lumbered with the disaster of a desktop experience that was Google Instant Search.
Similarly, Microsoft and Apple don't have great histories here. When looking for good development practices, you should always look at the example set by companies that need to compete. Monopolies don't need usability.
This is a great example of Google's spotty frontend work -- especially since you can simply tap the share button on iOS, and then scroll over to "Request Desktop Site". And voila, image search now works perfectly. You can even click the camera icon to upload an image directly from your phone. And yet, this functionality is completely hidden by default on mobile.
"Gee, I remember seeing this unique comment on a youtube video. Let me search for it, maybe some web-crawler has indexed it and made it available to the rest of the web. Nope."
"Oh I remember the video, it was XYZ. I'll just go there and find the comment. Oh, it's not at the top of the comments, and there are 5600 comments."
"It's okay, I'll just Ctrl-F for that specific word I distinctly remember was used in that comment. Nope, Ctrl-F only finds what's loaded in the DOM; you've got to scroll!"
"Hmm, maybe I can just keep scrolling for a while till I find it. Scroll, wait. Scroll, stop, loading icon, scroll some more, wait, scroll some more, wait."
Hey, at least I can share a specific time-stamp location within the video on Google+ and Facebook!
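The Ctrl-F complaint above is a real consequence of lazily-loaded lists: the browser's find only sees what is currently in the DOM, so a text search has to force every page of comments to load first. A minimal sketch of that idea, where `makeLoader` and `findInLazyList` are made-up names and the fake pager stands in for whatever the page's infinite-scroll trigger actually is:

```typescript
// Ctrl-F only searches what's in the DOM, so finding text in a lazily
// loaded comment list means exhausting the pager first. makeLoader fakes
// the pager; on a real page it would be whatever fetches the next batch.
function makeLoader(pages: string[][]): () => Promise<string[]> {
  let i = 0;
  return async () => (i < pages.length ? pages[i++] : []);
}

async function findInLazyList(
  loadMore: () => Promise<string[]>,
  predicate: (item: string) => boolean
): Promise<string | undefined> {
  const loaded: string[] = [];
  while (true) {
    const page = await loadMore(); // scroll, wait, loading icon...
    if (page.length === 0) break;  // pager exhausted: everything is loaded
    loaded.push(...page);
  }
  return loaded.find(predicate);
}

// Two fake pages of comments; the one we remember is on page two.
const loader = makeLoader([
  ["first!", "nice vid"],
  ["the comment I remember mentioned spelunking"],
]);
findInLazyList(loader, (c) => c.includes("spelunking")).then((hit) =>
  console.log(hit)
);
```

With 5600 comments this is exactly the scroll-wait-scroll loop described above, just automated — which is why people resort to userscripts for it.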
Building a usable experience in Flash was possible, but sufficiently difficult that almost no one did.
At the start of 2005, I was helping lead a team building a highly interactive experience for a major car company using Macromedia Flash and Flex 2. When we launched the site we had full cross-browser pixel accuracy, fully supported URL deep-linking, bookmarking, page history navigation, web crawling, keyboard navigation, screen reader support, interactive video, highly animated experiences, and even fully integrated web mapping using a beta version of Microsoft's first interactive map tech (it eventually became branded as Bing Maps).
What we built then was possible only because of Flash; there was no way we could have created the complete experience in JS/HTML/CSS. Granted, we could have done a lot of it in native web tech (and in some cases, we had to under the hood), but making it fully pixel-accurate cross-browser would have tripled the dev and testing time. On top of that, some of the features would have been impossible without Flash.
Let's recall where we were in 2005. Chrome was still 3 years away (it was released in 2008). Firefox was still version 1.0 (1.5 didn't release until Nov. of that year). Gmail had just been released into beta (you had to have a friend to get access). Google Maps was still an experimental beta, truly pushing the boundaries of what JS/HTML could do. The browser history API didn't exist, so we had to do crazy iframe hacks to create deep links and history (this was true of any app, no matter what tech). There was no video in browsers without a plugin (the HTML5 spec wasn't finalized until Oct. 2014). There was no such thing as CSS animations; they got their first release in Firefox 5 (June 2011).
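For readers who never lived through it: before `history.pushState`, deep-linking an app meant encoding its state into `location.hash` (changing the hash doesn't reload the page) and, on old IE, mirroring it into a hidden iframe so Back/Forward got a history entry. The encoding half of that hack can be sketched as a pair of functions — the `view`/`id` state shape here is invented for illustration:

```typescript
// Pre-History-API deep linking: app state round-trips through the URL
// fragment, since writing location.hash never triggers a page load.
// The state shape (view + id) is hypothetical.
interface AppState {
  view: string;
  id: string;
}

function stateToHash(state: AppState): string {
  return `#/${encodeURIComponent(state.view)}/${encodeURIComponent(state.id)}`;
}

function hashToState(hash: string): AppState | null {
  const m = hash.match(/^#\/([^/]+)\/([^/]+)$/);
  return m
    ? { view: decodeURIComponent(m[1]), id: decodeURIComponent(m[2]) }
    : null;
}

// In a browser you would assign location.hash = stateToHash(state) on every
// navigation, watch for hash changes to detect Back/Forward, and on old IE
// also write the state into a hidden iframe to force a history entry.
```

The hash trick worked in every browser of the era, which is why both Flash sites and early JS apps (Gmail included) converged on it.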
So, were Flash apps trash? Absolutely, but not all of them. That's like claiming that modern browsers solve it all. We still have massive load times (look at how much time and effort is put into optimizing content delivery), and cross-browser and backward compatibility is a nightmare—more of a challenge than it has ever been. In some ways, modern web development is much better than Flash development was, but to be perfectly frank, we have a LONG way to go. Honestly, we haven't caught up to where Flash was 14 years ago.
But we are getting there, for sure. I can see why Steve Klabnik is so excited about WASM. I can envision where it is going, and it reminds me of where Flash was trying to go before it was slaughtered by Adobe. It's an exciting time for sure, but we should also look back at where we came from; instead of just stating that Flash is trash and caused the web to be terrible, we should look at what it did right and what it allowed us to create before we throw the baby out with the bathwater.
So... nice one there, green user. You really got'em!
But Flash games were a boon. Everyone with an idea was able to make it into a game. I spent a lot of time playing games in Kongregate and similar sites. Many were really interesting, well worth the effort of sifting through the rip-offs and trash.
The internet speeds were much slower then, the difference mattered a lot. Also note, from the description:
"Sadly, it's not possible to replicate the pop-up mouse-overs from the "Special Edition" - though I was able to append the "deleted scene" presented in that version. If you want to see that version - or the "Fire BAD!" flash video game where you attempt to extinguish flames on James Hetfield (tragedy + time = humor), you'll have to find the original .SWF files and have a way to play them."
Spend time watching some of each year's WWDC talks and noting how much effort Apple engineers put into fine-tuning certain APIs to enable app developers to optimize battery use, disk access, or graphics rendering. State-of-the-art stuff. And then your Electron app just ignores all of that.
Manually targeting multiple platforms is really hard. How could one person possibly keep track of all the battery-saving API changes in OSX, iOS, Windows, Linux, and Android?
Yeah, it's annoying that Electron apps are so bloated. But the alternative would be > 50% of people can't use those apps at all because you only targeted one platform.
Which I tend to do, particularly since that "one platform" is almost always either Windows or macOS, neither of which I use on a daily basis (and neither of which I have any desire to use on a daily basis).
Meanwhile, there's such a thing as cross-platform GUI applications that don't try to fit an entire web browser into them. Especially if your programming language of choice doesn't require precompilation, frameworks/toolkits like Qt and GTK and Tk are perfectly viable for cross-platform development, at least on the desktop (which is usually where folks are using the likes of Electron or CEF anyway). They also tend to perform significantly better and stay much closer to the look-and-feel of the rest of the operating system (no, I don't care if you think you know better than me about my sense of style; if your app doesn't respect the look-and-feel of the rest of my system, then it sticks out like mold on a slice of Wonder Bread).
I don't think any of them target HTML/JS, though.
You don't. You just use system-provided APIs and get the improvements for free whenever the system frameworks are updated to be more efficient.
I'm not saying everything should be Electron, but it certainly has usecases just like native does.
But what's your game plan under other circumstances? Do nothing?
Just because an enterprise has funding doesn't mean it's wealthy in other, more scarce things like organizational capital. In fact, I wouldn't be surprised if all the client developers at Slack would love to split off into dedicated platform teams and do a rewrite. We as developers love that shit.
But you should try to understand why these enterprises make these trade-offs and why they are still in these positions in spite of how much disposable money you think they have.
Electron wrappers for originally web apps, on the other hand, are usually horrible user experiences—often they don’t even support right-clicking elements, or clicking on links to open them in a browser.
No, they don't. Ever tried to drag and drop into an Electron app? How about used VoiceOver?
I don't think more than a handful of the games and little videos that I enjoyed as a kid on newgrounds and addicting games would have been made had they not had such a low barrier to creation.
I do kernel/firmware development now, so I'll be the first to admit that this value proposition isn't valid for all domains.
But sometimes, for some use cases, just having a simple to use scripting environment is totally the way to go. Particularly if it's sandboxed well and can't really harm the end user.
Flash for the win! Till the day I die.
AS3 was amazing. When TypeScript came out, it was a lot like AS3. I was convinced, became a believer in TS, and never looked back.
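The resemblance is real: both are class-based, optionally typed dialects in the ECMAScript family, so an AS3-style class translates to TypeScript almost mechanically. A small sketch (the `Point` class is invented; the AS3 comparisons in the comments are from memory, not either language's docs):

```typescript
// An AS3-style typed class, written in TypeScript. The AS3 original would
// read almost the same: `public class Point { public var x:Number; ... }`.
class Point {
  constructor(public x: number, public y: number) {}

  // AS3: public function add(p:Point):Point { return new Point(...); }
  add(p: Point): Point {
    return new Point(this.x + p.x, this.y + p.y);
  }

  // AS3: public function get length():Number { ... }
  get length(): number {
    return Math.sqrt(this.x * this.x + this.y * this.y);
  }
}

const p = new Point(3, 4);
console.log(p.length); // 5
```

Typed parameters, class fields, and accessors were everyday AS3 idioms years before TypeScript existed, which is a big part of why ex-Flash developers took to TS so quickly.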
In those days there was no notion of SPA or other contemporary well-established patterns for online interaction.
I authored several commercial sites using Shockwave and the like, and it enabled us to give clients the ability to author media-rich, design-heavy presentations in a fashion they were familiar with from card-deck applications. At the time, doing so in a 'sexy' way made them stand out.
It was a cul de sac, but it was a worthwhile direction to test in terms of UX.
That it was proprietary was weighted differently in those days. The world was dominated by closed standards and open source was still vestigial especially in commercial applications.
Not arguing with you about the rest. As a web developer I still haven't made a website in the last 5 years that was as visually impressive as my Flash projects from before, and sometimes I miss the visuals....
But people forget that the AWESOME intro that you waited two minutes of loading to see... is only cool the first couple of times. Many, many people were losing 5 minutes of loading time just to visit websites that they used EVERY day.
By virtue of being installed on 98% of all computers, Flash video was a de facto standard video target. One that could be played on Linux without getting into legal grey areas too!
Without Flash looming over them, IE would never have supported cross-platform video - this would have been Microsoft undermining the WMP format.
I think that we would have gotten there eventually, but probably not as fast. The close - but not quite good enough - implementation of video in Flash was as much of a catalyst as it was a crutch to lean upon.
I agree with you that it's a little sad that amateur animation fell a bit out of vogue, but the tools are right there for anyone who's interested to pick up.
I have memories of being wowed by what creative people could do with Flash in the 90s, and I would not call that amateur animation. I think the problem (1) is that we currently seem to be lacking tools that let people skilled in the visual arts create things without knowledge of the mechanics, so to speak. It's like if, in order to write a great song, someone first had to understand how a musical instrument is built. I would not call Amanda Palmer or Lee Ranaldo amateurs, yet I bet they probably can't build their own instruments.
(1) I must make the disclaimer that for a couple of decades already I've been working only on the backend, so I may surely be missing part of the picture here.
Edit: Fixed footnote mark.
And I know there are still issues. Given today's speed, issues are more about security and accessibility.
My point was just that the artists using flash to create good content were not amateur animators.
Edit: My point was wrong as it was based on a misunderstanding of the use of the word 'amateur' by the person I was replying to.
I spent hundreds of hours in Photoshop as a teen, but I likely wouldn't have bothered with it at all if it weren't easily piratable.
That's a problem with cloud services generally. Pricing is mostly done with some countries and contexts in mind while totally ignoring others. With physical equivalents some years ago, local distributors made deals with their home offices to adjust for this, but that's not something usually done anymore.
Even now, HTML/JS can't do all of those things, and most of the things you can do are not as fast, while browsers are stuck with legacies to uphold. Flash had no DOM to worry about, no untyped language (AS3 was typed), and no CSS holding it back.
The general argument was that Flash sucks because people made terrible content with it. Which is like saying I hate having hands because I knock things over... so no limbs = no mess, PERFECT.
In turn, I think it helped push native apps, since plain JS/HTML apps just sucked in comparison when it came to experience and capabilities.
Flash should have been open-sourced. Hopefully, with WebGL and WebAssembly, someone can step in and create something similar.
Taking a step back, man, what a different world it was back then. I'd fire up MS FrontPage or Macromedia Dreamweaver and go to town. The expectations have changed on the maintainability, usability, and functionality fronts, so I understand why we are where we are today. But I do miss those simple days.
Steve Jobs and browser security holes killed Flash, and no current open web platform covers all the use cases MXML and AS3 covered for cross-browser development. I could analyze audio channels, run lightweight process concurrency via green threading, store user files, do i18n translation, stream over sockets, and work with actual binary data types in the browser in 2006. I could trigger actions based on events in video and audio streams. I had consistently applied CSS with animations across components in 2006. I had reusable web components in 2006. It's now 2018 and we still don't have cross-browser support for all of that. Oh, and I could run my app on the desktop in offline mode and in the browser.
Security was an issue. Looking back now, I think an Android-like permissions scheme is what it, and the browser, needs to fulfill the promise of write once run anywhere that the browser and the web tends to make.
I remembered recently that Haxe was initially based off of MTASC (something I used in the later AS3 days), and checked it out. It's quite a stable ecosystem that feels very familiar in syntax. Add in HaxeFlixel, and it's almost like Flash never left.
flash was a good tool for the websites they used to create, usually graphics and animation heavy websites low on interactivity. they used the export function to generate the swf including the html index page to embed it.
over the years focus shifted more and more to dynamic websites with content generated from databases and they were mostly lost there. dynamic content (loaded by http requests from databases) in flash usually turned into a huge pain in the ass after a while. for those projects we switched to a traditional website model where dynamic content mostly wasn't loaded into flash, instead it was a html-by-php website where flash animations replaced header jpegs (i.e. animated passive content).
so, in our case, flash was a good replacement for animated and slightly interactive but not dynamically generated content.
A lot of users did, too. But it was usually along the lines of "Oh, wow. This page has Flash. Well, I guess I'll go get a Coke while the Flash plugin loads into the browser and my computer can't do anything else. If I'm lucky, the whole thing won't crash and take all my work with it by the time I get back."
We romanticize the past.
Java applets OTOH had the exact experience you describe. Those were absolutely terrible.
No, Adobe Edge was killed dead; it was not rolled into Adobe Animate. Edge exported animations using jQuery and DOM elements, while Adobe Animate exports animations in pure canvas.
But it would still come with many of the drawbacks.
is supposed to be an implementation of the Flash VM in TypeScript, but apparently it can't even run in the latest Firefox anymore, and there have been no commits for 2 years.
It is the return of Flash, and that's a bad thing. We thought we'd won the war, but really we just won a battle.
(Edit: Typos. I should know better than to post from my phone by now. Grrr...)
[EDIT]: Steve is right of course, and I misspoke here, "WASM is still able to drive the DOM" is closer to what I meant to say.
What difference do you see?
Apps are apps.
Sometimes both are in the browser.
It'd be great if all "documents" had an HTML version, with minimal JS. For accessibility, searching, deep linking, etc.
Overall though, I think wasm shouldn't be replacing HTML/JS.
In the ancient web world, the site author wrote HTML to describe the data she wanted presented and the browser took care of making it accessible. But authors (especially companies) wanted detailed control of how their sites looked, so they turned to flash etc.
JS has long been re-playing this trend in slow motion -- moving away from web pages being interactive documents presented by the GUI app called browser and towards them being stand-alone GUIs like in flash.
Sure, there's "fuckAdblock" but that shortly spawned "FuckFuckAdblock". It's a whole different case when the very browser prevents the content from being tampered with.
It would be an interesting experiment to transpile a less complex browser, like Arachne, over to WASM as a proof of concept to demonstrate how awful this kind of future would be. (Yet another "if I had some free time" wish... >sigh<)
Don't. Most people will ignore the demonstration, but someone greedy will fork the project, build a library out of it, and start selling as a product to ad networks and media companies.
And people growing up with today's web-first, mobile-first computing model have no clue of the power and capabilities computers have. With data owned and hidden by apps/webapps, limited interoperability, nonexistent shortcuts, and little to no means of automating tasks, people won't even be able to conceive of new ways to use their machines, because the tools for that aren't available.
Now ordinarily, on PCs, you do that by means of simulated keypresses and mouseclicks, using scripting capabilities of the OS or end-user software like AutoHotkey. In the web/mobile-first, corporate-sandboxed reality, I can't imagine this capability being available, so Arduino and robot hand it is.
(But yeah, bastards will eventually put a front-facing depth-sensing camera, constantly verifying the user, arguing that it's for "security" reasons.)
I am, however, scared that HTML will go the way of Gopher. Why would anyone care to maintain boring hypertext documents when we can have the app of the day? Marketing departments everywhere tend to turn the web into Blinkenlights.
How many support documents from more than 15-20 years ago can you still find using the old links? So many sites work as dumb front-ends for a database.
Information retrieval and persistence over time is not something many worry about.
The cat is for sure out of the bag. I just hope what was still can survive.
JS or Wasm can't create documents by themselves, they still need a DOM. Even if it's a 2D canvas or some WebGL canvas, it's still a DOM element. Or even if it's just an iframe that loads some blob, on the top level it's still a DOM element. And as such it can be inspected and controlled.
Not if the content is decrypted by EME that's not fully controlled by the browser.
I hope so too, but as a member of predatory and territorial species, the cat will most likely keep on killing everything else around it.
WebAssembly enables load-time and run-time (dlopen) dynamic linking in the MVP by having multiple instantiated modules share functions, linear memories, tables and constants using module imports and exports. In particular, since all (non-local) state that a module can access can be imported and exported and thus shared between separate modules’ instances, toolchains have the building blocks to implement dynamic loaders.
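That spec language is concrete enough to demonstrate: instantiate one module, then feed its exports into a second module's imports, with the host acting as the dynamic loader. The two binaries below are hand-assembled by me as an illustration (an `add` exporter, and a `twice` module that imports it as `env.add2`), so take the byte arrays as a sketch rather than toolchain output:

```typescript
// Dynamic linking per the WASM MVP: module B imports a function that
// module A exports, wired together by the host.

// Module A: exports add(a: i32, b: i32) -> i32.
const modA = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type 0: (i32,i32)->i32
  0x03, 0x02, 0x01, 0x00,                               // func 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0/1, i32.add
]);

// Module B: imports env.add2, exports twice(x) = add2(x, x).
const modB = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,
  0x01, 0x0c, 0x02, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type 0: (i32,i32)->i32
  0x60, 0x01, 0x7f, 0x01, 0x7f,                          // type 1: (i32)->i32
  0x02, 0x0c, 0x01, 0x03, 0x65, 0x6e, 0x76,              // import "env"
  0x04, 0x61, 0x64, 0x64, 0x32, 0x00, 0x00,              // ..."add2": func, type 0
  0x03, 0x02, 0x01, 0x01,                                // func uses type 1
  0x07, 0x09, 0x01, 0x05, 0x74, 0x77, 0x69, 0x63, 0x65, 0x00, 0x01, // export "twice"
  0x0a, 0x0a, 0x01, 0x08, 0x00,                          // code section
  0x20, 0x00, 0x20, 0x00, 0x10, 0x00, 0x0b,              // local.get 0 x2, call 0
]);

const a = new WebAssembly.Instance(new WebAssembly.Module(modA));
const b = new WebAssembly.Instance(new WebAssembly.Module(modB), {
  env: { add2: a.exports.add as (x: number, y: number) => number },
});
const twice = b.exports.twice as (x: number) => number;
console.log(twice(21)); // 42
```

A real `dlopen`-style loader does exactly this at scale: resolve each module's import names against previously instantiated modules' exports, sharing memories and tables the same way.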
The code is fetched via URLs so you can link to it in that sense, too.
It's also cacheable.
The point was more that once webpages become applications running on the client (think single page apps), the natural document metaphor of web pages and the tooling built on it (hyperlinks, forward/back, bookmarks, history) falls apart unless you do extra work to ensure that experience is maintained.
But not everything needs to be a document. Sometimes the thing you're working with really is an application and not a document.
To me, one of the biggest problems with the current web is that we've commingled "app stuff" and "document stuff" so badly that browsers have been forced to become a shitty, inferior X-server (or Operating System outright), instead of being really good browsers. Browsers for browsing is great... browsers as a UI remoting protocol, is a bit janky.
Because you certainly can link to the wasm and js code that come with webassembly instantiateables.
Would I complain if I could run a full version of Word or Excel in the browser? The browser would become a universal interface in another way and decrease our reliance on particular operating systems.
I for one would, because the browser is an absolutely shitty interface. You're still forced into the "there are tabs, which contain sandboxed documents" model of use. Interoperability is nonexistent, integration with machine capabilities is superficial and completely opaque to the user, the data model is hidden (where is my localStorage equivalent of the file browser again?), and everything assumes you're constantly connected—it's a corporate wet dream, but for individuals, it's a nightmare.
Creating mobile- and/or offline-first experiences for individuals isn't a pipe dream; it was possible and happened in the 90's, when connectivity (dialup) informed content (largely offline or downloaded).
I'm not looking at replacement, only reasonable substitutes, which I think will become useful similar to using Google docs on mobile and web.
The Firefox developer tools have a "storage" tab that lets you inspect the content of various databases associated with a website.
Perhaps Web Assembly will drive this power usage down. But as it stands now, I actively avoid more than one of these app-on-browser products at a time.
In half the cases like that, stuff like sorting, list comparison, and deduplication is done in a way that would score a low mark even by the standards of a first-year university program.
This is telling of the web development industry's approach to doing business.
The most horrid examples of "LAMP sweatshops" of 10 years ago pale in comparison to what the industry has devolved into these days.
My own experience of being an involuntary webdev for 3 years left me with the following impressions:
1. Webdev is the largest commercial development niche in the whole tech industry. Everything else pales in comparison. It is also about making money quickly. A webapp or even a promo-page SPA for a major consumer brand these days can easily cost up to $100k. $100k does not seem a lot to most people here, but such money can be well offered for a 1-month project for a team of 6-8 professionals.
2. The industry is dominated by shops with 20 to 30 people headcount. Web dev studios generally don't scale much above that because of talent flight. The loss of a single senior dev who supervises hordes of lowest-tier mule coders is often the end of the business for many of these companies.
3. People from the "big dotcom" world are nearly oblivious to the ways of small web dev shops. For people who began their careers at 60k-a-year internships, getting into the shoes of a person who codes for 30k a year is impossible.
4. Talent flight and turnover is real.
5. This is all about really expensive quick and dirty code.
6. The "big dotcom" type of companies tried time and time again to tap into the market to extract rents, and with the exception of Macromedia nobody ever succeeded. This is the reason Adobe is lobbying for unusable, unwieldy APIs in hopes of selling tooling for them.
Quit webdev a year ago, now working in engineering consultancy.
By replacing it with a poor simulacrum of an operating system. Browser APIs are an inefficient subset of what libc and BSD sockets offer.
And they provide near-zero interoperability with native applications. No filesystem access (beyond the clunky save-one-file dialog), no CLI, no IPC, nothing. That means browsers are building on top of operating systems while not interoperating with them.
This is a step forward not backwards. The security model of allowing apps access to your full filesystem (assuming your user has access) is flawed. It leads to apps storing data in funny places, reading files they shouldn't, and general mayhem. Requiring the user to explicitly allow the app to access the file is a good thing.
There are some use cases that are hard to support (like being able to open all the files in a folder). But people are working on a solution.
> No IPC
WebRTC, while not the same and with far more overhead (its network transport layers vs. OS-level sockets), can function very much like IPC. And there is nothing stopping a process running in a different browser (or no browser at all) from connecting to a webapp locally using WebRTC.
> That means browsers are building on top of operating systems while not interoperating with them.
While I can't argue with that, the same is true of X Window. The abstraction between app and OS is a thick gray line, not a thin black one.
Most apps being limited to their little part of the filesystem is not a problem. The problem is, now as a user, I can't access those files. I can't view them in a form that suits me, I can't use other applications to operate on them. The true form of the data is forever hidden from me, a secret of the application that "owns" it.
I'd also love it if they gave that ability.
But that's been true for almost all users, and not just webapp users, forever.
SaaS and web kill that.
Given the ubiquity of Word document and Powerpoint presentation files and the like, most users I'll grant you are aware of the files themselves, and the fact that they can be attached to an email. I'll even grant that a large fraction of those same users could answer 'yes' to the question 'Could these files be opened by another application?'. But almost none would be capable of doing anything with those files without an application that handles everything for them.
I don't dispute tho that an awareness of, let alone existence of, files in a filesystem is a significant benefit and not having access to them is a (relatively) significant loss.
You are neglecting the option of exposing a limited subview of the filesystem, like containers do.
> But people are working on a solution.
The big red box on top says it's not on standards-track.
> WebRTC while not the same and far more overhead (due to TCP sockets vs OS level sockets) can function very much like IPC.
Can I send open file descriptors like I can with unix domain sockets? Can I share memory for low-latency atomics? Futexes?
> So is X Window.
Maybe if you're remoting X, but few people do that these days. In practice, X applications have access to the same machine they are drawing on.
No I'm not. I said the limitation is a step forward. I didn't intend to imply it is perfect. It is not at all perfect.
> The big red box on top says it's not on standards-track.
Correct, but most standards started as experiments by the browsers. I think it qualifies as "people are working on it," but it is probably far from being standardized.
> Can I send open file descriptors like I can with unix domain sockets? Can I share memory for low-latency atomics? Futexes?
No. But you already knew that. It does allow for data communication, which in my opinion covers the 80% use case for IPC. In my experience (YMMV), the features you described, while useful, are not needed for most consumer apps.
Don't let perfect be the enemy of good.
The problem isn't perfectionism, but that at least some of us believe that things are moving in the wrong direction - towards making vendors own everything, and end-users in control of nothing.
I'm still optimistic that new forms of applications will emerge from this. There are serious pieces needing fleshing out, like file access.
The insecure interoperation between browsers and operating systems can perhaps be reimplemented through a newer, more secure interface like WASM or the api.
The only example I can think of is the Twine engine for Interactive Fiction.
Doesn't work in Safari.
Wasm will almost certainly lead to UI frameworks for the Web. JS people try very hard to get similar stuff, but the language is just not good enough; at the same time, the desktop people who have this stuff are clamoring for some way to use the same on the Web. People are already working on those frameworks, by the way.
For the commenters who seem to have some underlying fear that WASM apps will be another incarnation of a "window in a window" or some horrible bitmapped graphics pane that does not fit into the web model:
WASM is just a CPU. It's a bytecode format for expressing low-level, high-performance programs. It comes "batteries not included"--intentionally. By batteries, I mean APIs. WebAssembly modules must import everything they need from the outside. When embedded in JS and the Web, the first and still primary use case of WASM, that means modules can import functionality from both JS and the Web, and call literally anything that JS can call. That means WASM can (though still somewhat clunkily) manipulate the DOM, WebGL, audio, service events, etc, through all of the same APIs that JS can do. There is nothing that prevents a WASM app from looking and feeling exactly like something written in JS.
To reiterate: WASM does not require you to drop down to canvas or render fonts yourself. You can call out to JS or direct to WebAPIs! (again, it just happens to be clunky to do this from C++.) But other languages are working on bindings that make this much nicer. Rust anyone? :)
What WASM gives the web is a proper layer for expressing computation. The APIs and paradigms that build on top of WASM are independent, swappable, interposable, by design. Because it's a layer for computation, and a low-level one, it is by nature language-independent. As Steve mentioned, adding languages to the web one by one does not scale. Thus WASM.
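The "batteries not included" point is easy to see in a working example. Below is a hand-assembled module (an illustration I wrote, not from any spec or toolchain) that imports a single host function as `env.add2`; its exported `twice(x)` can do nothing but call that import. In a real app, the host could just as well hand it a DOM-touching or WebGL-calling JS function:

```typescript
// A WASM module owns no APIs: everything it can do arrives via imports.
// This binary imports env.add2 and exports twice(x) = add2(x, x).
const wasm = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x0c, 0x02, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type 0: (i32,i32)->i32
  0x60, 0x01, 0x7f, 0x01, 0x7f,                          // type 1: (i32)->i32
  0x02, 0x0c, 0x01, 0x03, 0x65, 0x6e, 0x76,              // import "env"
  0x04, 0x61, 0x64, 0x64, 0x32, 0x00, 0x00,              // ..."add2": func, type 0
  0x03, 0x02, 0x01, 0x01,                                // func uses type 1
  0x07, 0x09, 0x01, 0x05, 0x74, 0x77, 0x69, 0x63, 0x65, 0x00, 0x01, // export "twice"
  0x0a, 0x0a, 0x01, 0x08, 0x00,                          // code section
  0x20, 0x00, 0x20, 0x00, 0x10, 0x00, 0x0b,              // local.get 0 x2, call 0
]);

// The host decides what add2 is. Here it's plain arithmetic, but it could
// wrap document.title = ..., a canvas draw call, or anything JS can reach.
const instance = new WebAssembly.Instance(new WebAssembly.Module(wasm), {
  env: { add2: (x: number, y: number) => x + y },
});
const twice = instance.exports.twice as (x: number) => number;
console.log(twice(21)); // 42
```

The module never sees the DOM, the network, or the filesystem directly — only whatever the embedder chooses to pass in, which is exactly why a WASM app can look and feel like anything JS can build.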
The fear isn't that it requires that. The fear is that it enables that.
The web is, and has been over the past two decades, in a constant state of war over control between publishers and consumers. People - and especially businesses - making pages would like to have 100% control over how the webpage/app looks and is being used. But the users would like to have some control over what they're viewing too.
The most widely known battle in this war is the battle for ad-blocking. The publishers want you to view lots of ads. You want just the content, without any of the ads. So far, the technology (and economics) favors the user, but it's not a given.
The fear here is that WASM again tilts the control in favour of publishers, which will lead to abuse and the web becoming a much worse place for consumers. If WASM, by virtue of efficiency, enables publishers to embed a browser they control within the page, publishers will use it, because this would single-handedly eliminate most ad-blocking, userscripting and scraping efforts.
 - and the power users, like myself, would like to have 100% of that control - think of how much better the web would be if data were always published in machine-readable format, without tons of bullshit pagination and stylistic choices to scroll and click through. For instance, when looking up current weather, I want to input my location and a time span, and get weather data. I want to be able to script that. I don't want to waste time looking at ads, pretty pictures, irrelevant text and links.
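The weather wish above is a good picture of what "machine-readable web" would buy. A minimal sketch, assuming a hypothetical JSON payload shape (no real weather service publishes exactly this): given structured data, the "location plus time span" query becomes a trivial filter instead of a scraping project.

```javascript
// Hypothetical machine-readable weather payload -- the shape is invented
// for illustration, not any real API's format.
const payload = {
  location: "Helsinki",
  hourly: [
    { time: "2019-01-01T06:00Z", tempC: -4 },
    { time: "2019-01-01T12:00Z", tempC: -1 },
    { time: "2019-01-02T12:00Z", tempC: 2 },
  ],
};

// "Input my location and a time span, and get weather data."
function tempsBetween(data, from, to) {
  return data.hourly
    .filter((h) => h.time >= from && h.time <= to) // ISO timestamps sort lexically
    .map((h) => h.tempC);
}

console.log(tempsBetween(payload, "2019-01-01T00:00Z", "2019-01-01T23:59Z"));
// -> [-4, -1]
```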
But on the other hand, I can't help seeing the enormous potential of a proper assembly language for the web. Web technologies have felt like a massive hack for decades: tools designed for basic text formatting and a bit of interactivity which have been stretched in extreme ways to meet the needs of the modern web. Web applications are the most widely used software on the planet, and if you ask me it's about time developers had the freedom to develop them in the language that makes the most sense for the task at hand rather than the only one available. And I am quite keen to see what kinds of new things will be possible when the ceiling is significantly raised for performance optimization.
So I have really mixed feelings here. On the one hand, I appreciate the power WASM gives. On the other hand, I don't trust the majority of companies on the web to use that power responsibly.
I feel the same way. But those ads are there because that's the entire business model of people putting weather data out there for free. On most free sites, ads aren't just a sideshow, they're the driving engine. Take away the ads, and there goes the business model.
What we need is some other way to pay for the weather data. Maybe this could be a service provided by your ISP, like NTP or DNS. Or some third party subscription service. Or maybe even taxpayer funded. But if you're using a service that relies on ads as their revenue model, then expect to put up with ads. They're part of the deal.
Pretty much every AAA video game of a certain period used Scaleform's Flash player for its user interface.
No. The compiler, maybe, and ActionScript, maybe, but not the player. The player is entirely closed source and there is no open spec for it. If there is one, show it to me.
> there was more than just the Adobe Flash Player as implementations.
Only Adobe's implementation could run all swf files. Scaleform was not an alternative flash player. Any attempt at creating an alternative and feature complete flash player failed.
Flash the tech is not open, at all.
So someone makes a game, and they use this very useful WASM library over here. Only that library exploits spectre or meltdown to steal data. Or maybe it just silently hoses your machine by targeting the new WebGL shaders? Or any myriad number of other things.
Let me be explicit. There are changes in WASM specifically made that render Spectre and Meltdown mitigations useless. (I.e., browser makers put in Spectre and Meltdown mitigations, and changes in WASM allow WASM content to get around those mitigations.) Developers cheer the changes, because they make WASM more useful, and to be fair, browser mitigations of Spectre- and Meltdown-type bugs make WASM far less performant. But changes which render those mitigations useless are dangerous no matter what your opinion is on how useful WASM should be.
Edit: Should probably mention that the upcoming changes include threading and shared memory. Implemented in a way that enables CPU side channel attacks. (Probably because there is no other way to get threading and shared memory without everything slowing to a crawl, but still.)
Could you be more specific? I implemented Chrome's Spectre mitigations for WASM and I'm not sure what you are referring to.
> Should probably mention that the upcoming changes include threading and shared memory.
These only give you a high-resolution timer mechanism--which you have to build yourself and is possible in JS with SharedArrayBuffer before. So WASM is no worse in this respect.
Some time in the (near?) future those vulnerabilities will just be a footnote in some history book and having to support mitigations forever (due to backwards compatibility) probably isn't the best thing to encode into a standard.
I'm sure some intrepid security researcher will find some new Vulnerability of the Day which can also be exploited through wasm and then they will need to add mitigation to the standard yet again ad nauseam until it becomes some giant bloated unusable mess for which we'll need yet another standard.
The problem with Spectre is that it is a bit different. The array-bounds class might be patched in the long run, but even from the beginning, people have suspected that the other classes of Spectre would be more slippery. And true to form, new Spectre-type variants continue to be discovered and disclosed by Intel even to this day (SpectreRSB, for instance).
In short, it's not a simple patch that will fix all these strains of bugs out there. In light of that fact, browser makers have implemented mitigations at the application level which are a bit more heavy handed. But, as you can imagine, this is going to impact all content inside the browser. Which brings us to the WASM content, and the threading and shared memory changes. And you know the rest of the story from there.
I think so, and the managers at Adobe and Sun must be kicking themselves for not somehow getting their runtimes more open, modular, and standardized, now that we see that write-once-run-anywhere with a few system hooks is all we need.
Then again... It was a different world in the mid 2000s. The web standardization process? Ha, what was that?
On a side note, I'm seeing more articles pointing out that WASM runs in the JS VM. Doesn't that negate the whole speed advantage of WASM?
> managers at Adobe and Sun must be kicking themselves
Both tried. As I recall Sun were blocked by Microsoft, and Flash was bundled as standard with Netscape from about 2001 onwards. Steve Jobs killed that stone-dead when he point-blank refused to support it on iDevices.
Adobe AIR beat things like Electron and PhoneGap to market by years. IMHO the issue with Adobe is this insistence on 'open' still having various very opinionated elements. Adobe AIR, for example, had a lot of good ideas but still attempted to evangelize Flash and ActionScript. I _think_ MS is trying to pivot out of that grave now with .NET Core. Time will tell if the Mono-to-Wasm or .NET Core Native projects have legs.
I was so very excited about Adobe Air and wrote a production application with it in 2009.
I _think_ a sweet spot for WASM is data processing. The data visualization space should explode once I can work with data in the browser at near-native speed.
Sun thought that they had something like that with Java applets ~20 years ago, except they forgot to make installation and UX compelling, and the memory requirements were unacceptable for the time.
For a while before being terminated, Flash got a native code backend.
Admittedly they could be clocked slightly higher if they were larger with better cooling, but they're by no means slow computers.
It basically means that WASM has the same safety/security model as the JS VM. Just like JS, it is compiled to native code (I'm simplifying a bit, of course) before being executed. However, where JS is one of the languages with the most complicated semantics around, which makes it really, really hard to compile efficiently, WASM has extremely simple semantics and is designed to be really, really easy to compile efficiently.
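The "complicated semantics" point is easy to demonstrate: even the humble `+` operator dispatches on the runtime types of its operands, so a JS engine cannot compile it to a single machine instruction without type guards and deoptimization paths. A WASM `i32.add`, by contrast, is exactly one well-typed operation.

```javascript
// One source-level operator, three completely different behaviors at
// runtime -- this is what a JS JIT has to guard against on every call.
function add(a, b) { return a + b; }

console.log(add(1, 2));    // 3                  (number addition)
console.log(add(1, "2"));  // "12"               (string concatenation)
console.log(add({}, []));  // "[object Object]"  (ToPrimitive coercion, then concat)
```

In WASM the equivalent function is declared as `(i32, i32) -> i32` up front, so the compiler emits a plain integer add with no guards at all.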
The WebAssembly version outperforms the asm.js version.
Over time you do more of your own thing and you or someone else splits these two pieces of code into three smaller ones. Like the LLVM backend that can be fed by a C or C++ frontend.
It runs in the JS sandbox, but it cannot be efficiently emulated on top of JS. "VM" is an ambiguous term.
Browser developers are talking about running JS in the Wasm VM. That will probably be reasonable very soon.