> "That's the standard language for computer programmers around the world, so using it let's us build our own chips. And HTML 5 is very secure. Every component is linked on the data network, all speaking the same language. It's not a bunch of separate systems that somehow still manage to communicate."
HTML/CSS/JS is ever-changing, control-intensive, and memory-intensive, so it's probably not a good candidate for hardware acceleration.
DOM Level 3 Core became a recommendation in April 2004. DOM4 went to last call in 2015. The fundamentals, I feel, are quite fixed, although many auxiliary systems do change.
> control-intensive, and memory-intensive
Latency to remote accelerators can be problematic for some control workloads; ideally, the control plane could offload itself onto the accelerator too.
I don't see memory-intensive as a barrier. The 8 VDOM+ processors probably come with sizable multi-megabyte caches. Perhaps they could be early processor-on-RAM architectures? After all, it seems they have a fixed-function diff pipeline. I'd also suggest that the hardware representation might be very effective at using low bit-depth encodings, saving gobs of memory. Keep text off-board and encode attribute values via some columnar representation, and this could be a high-throughput HTMLElement slinger and differ!
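To make that columnar idea a bit more concrete, here's a rough TypeScript sketch of the sort of layout I have in mind. All the names and field widths are made up for illustration, not anything the card actually documents:

```typescript
// Hypothetical columnar DOM snapshot: one typed-array "column" per field,
// all text kept off-board in string tables so only small integer ids sit
// in the hot data. Field widths are guesses, purely illustrative.
interface DomColumns {
  tagId: Uint8Array;       // index into an off-board tag-name table (< 256 tags)
  parent: Uint32Array;     // index of the parent node (0xffffffff for the root)
  firstAttr: Uint32Array;  // offset of this node's attributes in the columns below
  attrCount: Uint8Array;   // number of attributes on this node
  attrName: Uint16Array;   // index into an off-board attribute-name table
  attrValue: Uint32Array;  // index into an off-board attribute-value table
}

// The "fixed-function diff pipeline": walk two snapshots column-wise and
// emit indices of nodes whose tag or attributes changed. No pointer chasing,
// no strings - the kind of loop you could plausibly wire into hardware.
function diffSnapshots(a: DomColumns, b: DomColumns): number[] {
  const changed: number[] = [];
  const n = Math.min(a.tagId.length, b.tagId.length);
  for (let i = 0; i < n; i++) {
    if (a.tagId[i] !== b.tagId[i] || a.attrCount[i] !== b.attrCount[i]) {
      changed.push(i);
      continue;
    }
    for (let j = 0; j < a.attrCount[i]; j++) {
      const ai = a.firstAttr[i] + j;
      const bi = b.firstAttr[i] + j;
      if (a.attrName[ai] !== b.attrName[bi] || a.attrValue[ai] !== b.attrValue[bi]) {
        changed.push(i);
        break;
      }
    }
  }
  return changed;
}
```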
E.g. AV1, HEVC, H.264, etc. All of these either have or are about to have hardware acceleration. Why not JS?
The ideal "accelerator" for these kinds of jobs is a CPU with a big cache.
Video encoding has well defined control loops and data paths that don't arbitrarily interfere with each other, so it's a good candidate for custom hardware.
That is, you have a high-bandwidth, highly parallel fast path between framebuffer memory and functional units that compute FFTs and do motion vector operations, and a control plane that looks at a small handful of variables in order to decide which data-plane operations to schedule and how to glue together the final result.
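For a concrete example of that kind of data-plane operation, here's roughly what the sum-of-absolute-differences block match at the heart of motion search looks like. Toy sketch, 8x8 blocks assumed:

```typescript
// Toy sum-of-absolute-differences (SAD) kernel, the inner loop of motion search.
// Fixed 8x8 block size, fixed strides, no data-dependent control flow: the
// access pattern is known up front, which is why it maps so cleanly to silicon.
function sad8x8(
  ref: Uint8Array, refStride: number, refX: number, refY: number,
  cur: Uint8Array, curStride: number, curX: number, curY: number
): number {
  let sum = 0;
  for (let y = 0; y < 8; y++) {
    for (let x = 0; x < 8; x++) {
      const a = ref[(refY + y) * refStride + (refX + x)];
      const b = cur[(curY + y) * curStride + (curX + x)];
      sum += Math.abs(a - b);
    }
  }
  return sum;
}
```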
To run JS, you need a pile of functional units and lots of memory, and data for every operation needs to be able to come from / go to anywhere in memory. That's... just a general purpose computer.
JS/CSS/HTML5 are not: they essentially have an open-ended, effectively infinite amount of branching and data dependency, and I'm very skeptical a card could achieve much.
This is before we start talking about stuff like latency to the main CPU. CPUs are EXTREMELY fast compared to accesses over buses like RAM, and especially PCIe; I would not be the least bit surprised if even a theoretical infinitely fast HTML5 accelerator card would still not be worth using, due to the latency of fetches from the card.
It's already not worth offloading things like cryptography to accelerator cards, even though every major crypto algorithm was designed to run fast in hardware. And this is before we start talking about stuff like AES-NI.
Just for the sake of argument, I should point out that the bulk of the work done by JS/CSS/HTML involves primitive operations over a tree data structure. Conceptually, this paves the way to opportunities in hardware acceleration, similar to how the extensive use of polynomials in number-crunching applications led to the addition of fused multiply-add instructions.
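For instance, one of those tree primitives might look like the sketch below: a toy version of inherited-style resolution, with made-up types that aren't any real engine's representation. It's the same tiny look-up/merge/recurse step repeated over every node, which is what invites the FMA comparison:

```typescript
// Toy element node - a stand-in for a DOM element, not any engine's real type.
interface Elem {
  tag: string;
  style: Record<string, string>;   // properties set directly on this node
  children: Elem[];
}

// One "primitive operation over a tree": resolving inherited style.
// (Real CSS only inherits certain properties; this toy version inherits all.)
function resolveStyles(
  node: Elem,
  inherited: Record<string, string> = {},
  out: Map<Elem, Record<string, string>> = new Map()
): Map<Elem, Record<string, string>> {
  const computed = { ...inherited, ...node.style };
  out.set(node, computed);
  for (const child of node.children) {
    resolveStyles(child, computed, out);
  }
  return out;
}
```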
Well, that and the idea of permanently burning the, uh, unique design choices made by the web platform into hardware is a little horrifying.
However, most of them worked by using the video-overlay feature on cards, where the hardware video decoder injected its output directly into the GPU's output (after the framebuffer) via an internal header, or even injected itself into the GPU's VGA output signal using a D-Sub input on the back of the card.
For a very brief time in the late 1990s there were partial MPEG-2 decoder cards that hooked themselves into DirectShow to do the bulk operations needed for DCT and/or motion compensation, but not render the entire MPEG scene; they'd feed their results back to the CPU rather than the GPU... IIRC.
The funny thing is that overlays are coming back (soon, I hope!), not for performance reasons, but because a compositing window manager like the DWM introduces an additional frame of latency. If a foreground window is being displayed 1:1 on the desktop, the GPU can simply overlay it directly onto the output signal and thus eliminate that frame of latency. Some Linux WMs support it already, and Microsoft said they're working on it.
It could be great for smaller parts, though. If someone can make a super-fast font-rendering accelerator, it could help in general. Alternatively, we could adopt the GPU-accelerated one created for Servo.
I decided no due to the 256MB of Emoji-Cache...
My sarcasm detector is broken... or maybe it's just the web...
EDIT: http://mitsuhiko.pocoo.org/flask-pycon-2011.pdf ugh pdf
If you don’t believe me, disable hardware acceleration on your machine (force the video card to VESA or framebuffer mode or something), and try to read the news, use web apps, etc. Compare them to native 2D apps, which should still generally work just fine.
In my experience, a modern, headless 24-core Xeon with 128 GB of RAM and 2x 10Gbit NICs can't even run Jenkins and Jira comfortably at the same time.
I'm not sure if you're joking or not, but in the event you were being serious, I should point out that the performance impact of doing a lot of requests is not due to the CPU but to the time wasted waiting for the response to arrive.
You'd be hard pressed to find a hardware-based strategy for the client-side that would make servers send their replies faster.
And while the client waits for a reply, their CPU just idles.
Many of those requests are also running GPU code to fingerprint clients. Canvas and WebGL. To "prevent clearing cookies to bypass paywall" fraud and ban scrapers.
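For anyone who hasn't seen the technique: below is a bare-bones sketch of canvas/WebGL fingerprinting. Real tracker scripts render far more elaborate scenes and mix in many more signals, but this is the shape of it:

```typescript
// Bare-bones canvas + WebGL fingerprint sketch. Tiny rendering differences
// (fonts, antialiasing, GPU drivers) make the canvas pixels differ between
// machines, and the unmasked WebGL renderer string identifies the GPU/driver.
function fingerprint(): string {
  const canvas = document.createElement("canvas");
  canvas.width = 240;
  canvas.height = 60;
  const ctx = canvas.getContext("2d")!;
  ctx.textBaseline = "top";
  ctx.font = "16px Arial";
  ctx.fillStyle = "#f60";
  ctx.fillRect(0, 0, 120, 30);
  ctx.fillStyle = "#069";
  ctx.fillText("fingerprint 😃", 2, 2);
  const pixels = canvas.toDataURL();

  const gl = document.createElement("canvas").getContext("webgl");
  const dbg = gl?.getExtension("WEBGL_debug_renderer_info");
  const renderer = dbg ? String(gl!.getParameter(dbg.UNMASKED_RENDERER_WEBGL)) : "unknown";

  // Cheap FNV-1a hash of the combined signals.
  let hash = 0x811c9dc5;
  for (const ch of pixels + renderer) {
    hash = Math.imul(hash ^ ch.charCodeAt(0), 0x01000193) >>> 0;
  }
  return hash.toString(16);
}
```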
Neither making requests nor transferring data around is a CPU-bound activity.
Prior art (kind of): XML Accelerator XA35 - http://soasecure.com/xml-accelerator-xa35/
> The XML Accelerator XA35 is a highly efficient XML processing engine that makes use of purpose-built features such as optimized caches and dedicated SSL hardware to process XML at near wire-speed.
> The appliance can be used inline in the network topology, not as a coprocessor that hangs off a particular server. A popular use for the appliance is to receive XML responses from servers and transform them into HTML before forwarding the response to the client.
The whole thing is kind of "last century", but amusing to know that dedicated hardware exists for XML.
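For a sense of the work being offloaded, here's a minimal browser-side sketch of that "XML response in, HTML out" job using standard DOM APIs. The appliance's actual transform engine isn't specified in the blurb, and the URLs here are placeholders:

```typescript
// Minimal sketch of the XML -> HTML transform described above, done in the
// browser with standard APIs instead of inline on the network. The URLs are
// placeholders; the XA35's real transform pipeline is not documented here.
async function xmlToHtml(xmlUrl: string, xslUrl: string): Promise<DocumentFragment> {
  const [xmlText, xslText] = await Promise.all([
    fetch(xmlUrl).then(r => r.text()),
    fetch(xslUrl).then(r => r.text()),
  ]);
  const parser = new DOMParser();
  const xml = parser.parseFromString(xmlText, "application/xml");
  const xsl = parser.parseFromString(xslText, "application/xml");

  const proc = new XSLTProcessor();   // standard in-browser XSLT engine
  proc.importStylesheet(xsl);         // load the transform rules
  return proc.transformToFragment(xml, document); // XML in, HTML nodes out
}
```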
Sure, it'd make coding harder, but the user experience would probably be better.
Or maybe I’m just talking out of my ass here.
Only half so. It’s not that much harder.
I get it, sometimes you want to keep the syntax the same for future maintainability, while other times I just want to use something I already know will work just as well.
I can't tell if that card would be a great thing or a terrible thing if it really existed. On the one hand, encoding decisions in hardware might slow down the pace at which the web shifts around. On the other hand, web page complexity would expand to fill the available processing power, so pages would only be fast for the web devs and anyone else who has the expansion card.
For example, that card's graphic mentions a Virtual DOM, which isn't a technology. It's bullshit that comes with the React framework.
Apple's Mac Pro, AirPods Pro, or iPhone 12 web pages, the Facebook feed, etc. etc....
High-quality images with animation and jank-free scrolling are still not done right (or not even possible) by any major tech company in 2020. And that is just web pages, not even web apps.
And preferably without my quad-core MacBook Pro, with its GPU-accelerated browser, ever warming up my lap.
You'd need a bunch of those $15 HDMI-in (to USB) adapters to use the Chromecasts as accelerators, but the idea is very much there: hardware that does the web.
Don't get me wrong, it's a great concept, I like the idea of portability everywhere, but I can't get past the fact that it's basically just a stripped-down Chrome browser with the "app" effectively being plain old HTML/CSS/JS. It just seems pointless when you can run the exact same software in the web browser you likely already have running, with less overhead to boot.
The music streaming service Tidal really highlighted some of these issues for me, and started my hate-train. Their desktop application is Electron-based and supports HiFi, as does Chrome. The crazy thing is that Chrome is the only browser that supports HiFi, and it has been this way since the service launched in 2014, despite countless requests from FF users to add support. If Tidal is going to spend 6 years ignoring everything but Chrome for the sake of their Electron app, IMO other companies are going to follow suit and continue the march towards Internet Explorer 2.0.
Electron is cancer. I hate it and I'm not backing down off that.
This isn’t necessarily true though. Electron gives you file system access (among other things). Most electron apps I’ve used at least do something that a browser cannot do. Though definitely not all.
Also, being able to alt/cmd + tab to the application you want is often convenient.
One alternative is a PWA, but there's no interaction with the OS since it's sandboxed, and of course different platforms mean different browsers with more or less support for PWAs, so it's not a fit for a Git desktop client, for instance.
However, they're messing up by making it a PCIe card. They need to make it USB-C or give it a Lightning connector so that people can use it on laptops or mobile devices. No need to upgrade your phone when you can plug in this accelerator!
"It's one browser, Michael. What could it cost? 10 engineers?"
Partial implementations can exist as valid user agents, and most developers don't even seem to realize you can have a compliant browser that doesn't render to the CSS 2.1 box model specification.
I would still love to buy one though.
Obviously fake, could not possibly make React tolerable.
(PDF of a Servo talk from 2014)
I wonder if this could be one of the ARM MacBook's killer features: a web browser with a Servo-like parallel engine, written for custom silicon with a number of small-ish, low-frequency ARM cores. Or would that be excessive?