Gpu.js – GPU Accelerated JavaScript (gpu.rocks)
231 points by apo 55 days ago | 113 comments



Warning: this hard locked my MBP. More detail after I’m back at my machine.

Edit: System details

Retina Mid-2015 15" - MJLT2LL/A - MacBookPro11,5 - A1398 - 2910

Firefox 65.0.1 (64-bit)

The hang occurred when running the benchmark with Texture Mode enabled. The spinning gear froze after a few seconds, then the display went black and the fan was maxed.

It was a pain in the ass to recover: my only option seemed to be to power down and power back up again, at which point OSX helpfully remembered the state my machine was in and dropped me back into the blacked-out display and maxed fan. Some combination of NVRAM reset, SMC reset and safe mode was required to get back in.


This is nuts. There’s no way a webpage (or app for that matter) should be able to crash an entire OS.

Something's gone very wrong with our software.


You’re right, but it’s not by design, it’s just a little buggy and a lot complicated. There are still easy ways to lock a browser up with CPU code too. It’s harder to crash the OS, but it does happen.

There also shouldn’t be ways to crash the internet or a large scale redundant network service like the ones Google & Amazon have set up with a single microservice or database corruption or flaky cache or router, yet cascading failures are happening all the time and getting long post analysis write-ups featured on HN, even.

The main thing that’s “wrong” is complexity, but it’s also here to stay, there’s no going back, there’s just tightening up the sandboxes and hardening the APIs. GPU resource management is still a bit more raw than what the OS & CPU have, but it’s steadily improving every year. And aside from crashing sometimes, browsers have become very careful about GPU sandboxing for privacy.

In short, it has only gotten better over time, it won’t be long before a webpage really can’t crash the OS just by using the GPU.


If you need the GPU within the browser, then the user should be asking why?

Why should they provide such expensive resources to any third party?


If you're doing any kind of graphics work, the GPU is actually the less "expensive resource" as it will do it faster and more efficiently.

Any kind of <canvas> work, graphs/charts, browser games, animations, and video will all perform better with some GPU assistance. The alternative is to run everything on the CPU, get a more limited experience, and have your CPU maxed out at 100% a lot more.


That’s a strange framing, I’m not sure I understand what you mean. The reason for WebGL is much faster graphics for me, I don’t ask why because I want GPU in the browser. Which third party are you talking about? What does the expense of a component have to do with anything? If you’re assuming a malicious GPU computation done without the user’s intent, maybe state that assumption, but that’s not what the parent thread was talking about. I’m certain that if GPU abuse were to become a thing, the browsers will immediately deny default access and request user permission, just like with audio. In the meantime, you can currently turn off GPU access in your browser settings if you’re afraid of something.


The context here is the GPU.js library, which is general purpose computing using the GPU.

Does this actually build on top of WebGL?


Yes, GPU.js is built on top of WebGL. So this exposes no new functionality, only new convenience. And yes, the context is GPU.js, general purpose compute for the GPU. That doesn't mean you're "providing" resources to any third party, right? The primary reasons to want general purpose GPU compute in the browser are the same as the reasons to want GPU graphics - to make my own experience faster.
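
For a sense of the convenience: the matrix-multiply benchmark on the site boils down to something roughly like this (sketched from memory of the GPU.js docs, so method names may differ slightly):

    const gpu = new GPU();  // meant to fall back to plain CPU JavaScript if WebGL isn't available
    const multiplyMatrix = gpu.createKernel(function(a, b) {
        var sum = 0;
        for (var i = 0; i < 512; i++) {
            sum += a[this.thread.y][i] * b[i][this.thread.x];
        }
        return sum;
    }).setOutput([512, 512]);  // one GPU "thread" per output cell

    const c = multiplyMatrix(a, b);  // a and b are ordinary nested JS arrays

The kernel function gets transpiled to a fragment shader behind the scenes, which is really all the library adds on top of WebGL.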


Thanks for clarifying.

The security risks of WebGL have been discussed for years, so as concerning as that is, it's not what I'm interested in.

If you as the developer of a web application need the power of a GPU, then aside from graphics my question is why?

The power draw of GPUs isn't trivial, and the vast majority of browsers are running on hand-held devices.

Aside from minimalist sites like hacker news, websites already drain my phone's battery very quickly.

If it's meant for desktops, then to what end?


I guess there's two different topics in there.

First, I agree completely that the web has insane amounts of bloat relative to the size of the content you request. Most people here would agree; frameworks and ads and analytics are crazy. This is true, but irrelevant to GPU.js.

Second, GPGPU applications are numerous. There are several listed here in the comments and on the GPU.js site, e.g., ray tracing, matrix multiplication. A really obvious one for HN would be neural network training and/or inference, or any other optimization technique. It's useful for physics sims, video editing, and image processing. Here's a longer list: https://en.wikipedia.org/wiki/General-purpose_computing_on_g...

That's not to mention that it's just fun in the spirit of hacking to figure out how to use your systems in weird ways. The cool part about GPU.js is the GPU access with transparent CPU fallback - you get to write GPU-CPU agnostic code in javascript. It's neat that they implemented in javascript a way to cross compile javascript to a shader.

You're right that the power draw of a GPU isn't trivial, but the watts per megaflop are lower than on a CPU for well-crafted, highly parallel compute applications, even on mobile devices. I don't imagine using GPU.js on a phone; my personal assumption is that GPU.js is mostly meant for developers playing around on desktops right now. The benchmark specs on the GPU.js page describe a desktop system.


GPU drivers are insanely complicated beasts. While doing my advanced graphics work, OpenGL with my nvidia card has locked my machine many times.

WebGL propagates that. There’s nothing Chrome can do, since those WebGL calls are essentially proxies to the GPU, which is also used to display the rest of the OS.

Most GPUs don’t have a strong sandbox, and things blow up.


Sure, but I mean, exposing almost raw GPU calls to the open web is… nuts. There must be a less powerful, safer subset.


Browsers do a lot of blacklisting of drivers with known problems.

Things are slowly improving, especially as browsers themselves, especially Firefox, steadily do more GPU stuff (putting pressure on the driver makers to fix things), but there’s a reason why it’s been very rare for non-game UIs to target GPUs for rendering: GPU drivers are atrocious. Even if they don’t crash, the number of absolutely fundamental and critical bugs that they have is very impressive. As an example of a fundamental thing being broken, try https://github.com/tomaka/winit/pull/332#issuecomment-341383... and later on down in the thread where a driver update fixed it.


As another example, here is the long list of blacklisted drivers and cards in Chrome: https://cs.chromium.org/chromium/src/gpu/config/software_ren...


The list of driver issues that Mozilla has run into while developing webrender is also impressively long: https://github.com/servo/webrender/wiki/Driver-issues


(Actually, I misrecalled the cited example: the driver update fixed that bug, but introduced another far worse one.)


There is no useful subset that doesn't have the same problem (see my longer response). We did an investigation back when making O3D (before WebGL) into whether there was a smaller subset that was both performant and didn't have the issue. The answer was no. It does not require shaders to trigger the GPU issue mentioned in my other comment, and there's no performant way to check whether geometry submitted to some fixed pipeline is going to trigger the "too long" issue or not.


Pixel and vertex and other shaders that run on the GPU are... mostly benign. They can’t be used to take down a computer unless there is a bug in the hardware or graphics card driver, the latter being very common, unfortunately.

The shader language is pretty simple; there is no way to really subset it. The only safe alternative is not executing user-provided shader code on the GPU, but that just means making it completely unavailable.
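
To illustrate: even a trivially simple, valid shader has data-dependent runtime, which is part of why subsetting doesn't help. A rough WebGL2-style sketch:

    // illustrative sketch only - the loop bound arrives at draw time as a uniform,
    // so no static check of the source can bound how long this runs on the GPU
    const fragSrc = `#version 300 es
    precision highp float;
    uniform float iterations;
    out vec4 color;
    void main() {
      float x = 0.0;
      for (float i = 0.0; i < iterations; i += 1.0) {
        x += sin(i);  // busywork the compiler can't fold away
      }
      color = vec4(x);
    }`;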


Earlier versions of WebGL were less powerful, but that made it hard to do anything useful with them. So now we have this. Vendors are busy adding a Vulkan-style web API now, so we'll have even more fun.


Blame Apple. They have the worst graphics stack when it comes to stuff like this.

Most GPUs are not pre-emptable (maybe that's changing?). Because they are not pre-emptable, if you give them something to do they will do it until finished. There is no interrupting them, saving state, and switching to something else like with a CPU.

Microsoft realized this and so built a timeout into the OS. If the GPU is given something to do and doesn't come back in a few seconds, then they reset the GPU (like shut the power off and turn it back on, or rather send a PCI reset). You need to be aware of this in Windows programming if you have data you need on the GPU. Most games don't, and can just re-upload their data if told it was ejected, but some GPGPU calculation will at least get told "sorry, GPU reset, start over".
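
From the web page's side, that reset shows up as a lost WebGL context, which pages are supposed to handle roughly like this (handler names are just placeholders):

    const canvas = document.querySelector('canvas');
    canvas.addEventListener('webglcontextlost', (e) => {
        e.preventDefault();            // signal that we intend to recover
        stopRenderLoop();              // placeholder: pause our own work
    });
    canvas.addEventListener('webglcontextrestored', () => {
        reuploadBuffersAndTextures();  // placeholder: everything we had on the GPU is gone
        startRenderLoop();
    });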

Apple didn’t implement anything like that in OSX (now MacOS) for the longest time. The entire OS is built around the idea that all apps are well behaved (LOL). If a single app gives the GPU 30 minutes of work to do, then the GPU is going to do 30 minutes of work and your only option is to hold the power button.

They tried retrofitting in a timeout, but AFAICT there are just too many parts of the OS that expect no one will ever hog the GPU. It filters back to CPU land when the CPU tries to do something (queue drawing some windows) and the GPU doesn't respond for half a second, 1 second, 20 seconds, etc., and stuff piles up...

Linux had (has?) exactly the same problem. I don't know how much progress has been made on either MacOS nor Linux to deal with this.

Now comes WebGL, which you guys won't like, but basically any page can give your GPU something that takes a few minutes to run, and your machine (if it's not Windows) may crash. The mitigating factor is it doesn't help the bad guys. It's not like they can steal your info. The most they can do is convince you never to visit their site again, so it's a mostly self correcting problem. The hard spots are if the person making the site was targeting a top-end GPU and you visit with a low-end GPU.

There are really only 2 real solutions. The OSes need to handle the fact that GPUs are non-preemptable or we need preemptable GPUs.

Some will argue get rid of WebGL. I'd argue the issue is mostly self correcting and that Google Maps is pretty awesome and it gets that awesomeness from being able to render with the GPU. There's plenty of other 2D and 3D visualizations WebGL enables. So I'm happy to live with the self correcting problem for the benefits while I wait for pre-emptable GPUs for the real fix.

This is actually one place where I'm surprised there are cloud GPUs since they'd have the same issue. I've never used one but they must have some way of being reset. They must also not be shared? Or you just have to accept that some other process on the machine can completely hog the machine.


>Linux had (has?) exactly the same problem. I don't know how much progress has been made on either MacOS nor Linux to deal with this.

All my AMD GPUs have been issued GPU resets for years. I know that in 2013 my Radeon 5870 (the card I had at the time) was issued soft resets, which were logged in dmesg as a success. The problem with that is that whatever program caused the GPU lockup would continue to run (unless it crashes, which is frequently the case) until the kernel had reset the GPU so many times that the only option left was a hard reset.

On Windows, the way GPU lockups are dealt with is also by resetting the GPU, and it's the same there too: if it keeps locking up several times within a certain time span, the kernel panics and you either get a BSOD or a complete system lockup.



Wouldn't cloud providers just bill you to death if you hogged their machines?


As an MBP user, it's kind of funny the most dangerous thing I can do with my crazy expensive computer is use 100% of its resources in a sandbox.


Well, you could load random kernel extensions you find on the internet…


That's the spirit.


The benchmark in "Texture Mode" just shot my system as well. (Older MacPro, Firefox 65.0.1) No visual response anymore / hard system reset required.

This is why computing on the GPU with code loaded by a web page isn't a good idea.


It's amazing how quickly my attitude toward the idea of GPU-accelerated JavaScript flipped from "awesome idea, let's check out the comments!" to "how about no."


The idea is great, but GPU OSes (called firmware to disguise the complexity) are about as good as Windows 95 and much more complex.

No virtualization yet, very little memory safety, expected use case is single big task.


It locked up a tab on my unbelievably-low-spec Acer Chromebook 11, but eventually came back to life to tell me it's 5.53 times faster than the CPU.


I ran this test on an iPhone 7 with battery health at 99%. It made the battery drop 3% in about 30 seconds.

I don’t know what else it is, but it is a great DDoS tool for batteries.


*iPhones

They seem to have the highest computing power to battery size ratio among mobile devices in general.

On one hand it's great because they're powerful, on the other this here happens.

Around Christmas there was this sand game which was really neat, but it sucked my SO's battery dry in moments while holding a stable 40fps.

My Galaxy S8 ran it slower and choppier, but considerably longer.


Same on Ubuntu 18.10, Firefox 65.0.1 (64-bit) on NVIDIA Quadro P400.


Worked fine for me.

- macOS 10.14.3 (18D109)

- MacBook Pro 15-inch 2017 (Intel HD Graphics 630 1536 MB + Radeon Pro 555 2048 MB)

- Chrome 72.0.3626.96 (Official Build) (64-bit)

The page definitely got more choppy with Texture Mode enabled (guessing the perf delta is higher, so they run a more intensive test to get significant results). I'd test on Firefox, but I'm currently at work, so I don't want to risk it haha.

Firefox 65 seems to be the common denominator I see for these crashes.


Same. MBP2018/FF65. Hard reboot needed to get back up. Interestingly, when it soft-crashed FF had a dangling process which wouldn't let me open the app again (~"Only one FF can be running").

Guess I was well overdue for a reboot anyway...


Worked fine on my MacBook Pro (Retina, 13-inch, Early 2015)–apparently the GPU was 367.95x faster. Might be something with your discrete graphics card, or your use of Firefox?

Edit: works fine for me in Firefox Nightly.


> my MBP

In my experience, Apple's OpenGL drivers are incredibly unstable. On Windows I'll often see incorrect results, but generally the drivers have not crashed for normal inputs. When I've run my (admittedly kinda avant-garde) shaders on iOS and OSX devices, there's a good chance that the whole system will halt at some point. I honestly don't know how they have problems like this.


Mine too. My MacBook Pro has a discrete GPU, but using it for gaming results in all kinds of weird behaviour.


Same. MBP 2012/FF65. Not the first time I've noticed that macOS has a lot of trouble with big I/O (disk, network, and now GPU).


I recently hard locked my Windows machine by trying to be smart with filters on a complex SVG. It can happen.


Worked ok on my iPhone Xs.

GPU is x105 the speed of CPU according to this benchmark.


3-6 times faster on my iPhone X in Chrome in texture mode, 1.2 times faster in default mode. 11 times faster in default mode in Safari, 5.5 times in texture mode. The results above are for a battery-powered device; repeating the tests with the charger plugged in didn't help Chrome, but improved the Safari result to up to 60 times faster.

It looks like results vary significantly from time to time.


Chrome on iOS is just a UI around Safari's rendering engine, per Apple rules. I'm going to hazard a guess you were just seeing normal variation between tests.


I am aware of that. I suspect Safari somehow manages to consistently get more GPU priority than Safari-based Chrome.

¯\_(ツ)_/¯


Same on my Gentoo box with FF and nvidia proprietary drivers. A simple reboot was enough, though.


I love this comment in the code: https://github.com/gpujs/gpu.js/blob/develop/src/backend/web...

    // Here be dragons!
    // DO NOT OPTIMIZE THIS CODE
    // YOU WILL BREAK SOMETHING ON SOMEBODY'S MACHINE
    // LEAVE IT AS IT IS, LEST YOU WASTE YOUR OWN TIME


I don't love it, because let's say I really did want to change it: it doesn't tell me how much effort has previously gone into it or what happened. Maybe it means 10 hours have been spent in vain, maybe 10,000. If I want to try my luck, that's important.


Ah yes, ye olde IEEE float to int conversion for ancient drivers that don't have good GPU integer support.


I've built an animated raytracer with an early version of this. http://raytracer.crypt.sg/


How does one go about learning WebGL?



How can I block this so that websites aren't mining crypto on my system?

Is it a matter of blocking js entirely or is there a way to achieve a bit finer granularity?


This paired with the recent web workers exploit [1] shows the direction malicious and pwned sites are very quickly moving. I think a good solution would be to implement resource control at the domain level in browsers and give the user the ability to set the preferred values.

[1] https://www.zdnet.com/article/new-browser-attack-lets-hacker...


That worker exploit doesn't make any sense, as written:

> Technically, Service Workers are an update to an older API called Web Workers. However, unlike web workers, a service worker, once registered and activated, can live and run in the page's background, without requiring the user to continue browsing through the site that loaded the service worker.

This isn't true. A service worker will be automatically shut down when it isn't attached to a browsing context. Not immediately, but it definitely doesn't support what the article describes.


This is covered in the article.

> The attack routine consists of registering a service worker [...] and then abusing the Service Worker SyncManager interface to keep the service worker alive after the user navigates away.
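
For reference, the registration side of that looks roughly like this (a sketch of the standard Background Sync API, not the actual exploit code; the tag and work function are placeholders):

    // page
    navigator.serviceWorker.register('/sw.js');
    navigator.serviceWorker.ready.then((reg) => reg.sync.register('keep-working'));

    // sw.js - the 'sync' event can wake the worker after the tab is gone
    self.addEventListener('sync', (event) => {
        event.waitUntil(doMoreWork());  // placeholder
    });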


I use Firefox + uMatrix to selectively enable javascript for specific sites; otherwise javascript is strictly disabled for most sites.


uMatrix also allows you to whitelist web workers, which are a typical component for js crypto mining


Which column do those fall under? I see cookie/css/image/media/script/xhr/frame/other. I assumed they're under web workers but, me not being a web developer, am not sure.

I absolutely 100% do not want web workers (service workers?) enabled in any way shape or form. Any website which uses one is, in my opinion, not only extremely poorly designed but uses anti-patterns.


> I absolutely 100% do not want web workers (service workers?) enabled in any way shape or form. Any website which uses one is, in my opinion, not only extremely poorly designed but uses anti-patterns.

that's kind of an unfair statement. web workers are extremely useful for moving computation away from the (single) gui thread, allowing the page to load and process large amounts of data without locking the main browser thread. what you're saying is equivalent to saying "any program that uses threads is poorly designed and uses anti-patterns".

while, like threads, there can be bad uses for them, they can be used extremely effectively for good.


> what you're saying is equivalent to saying "any program that uses threads is poorly designed and uses anti-patterns".

No, what I'm saying is that I want my web browser to have a light footprint. The browser from 10 years ago works perfectly fine: it rendered and displayed text and images and didn't require hundreds of megabytes or even gigabytes of disk space. The anti-pattern is the idea that so many new things are "needed".


> No, what I'm saying is that I want my web browser to have a light footprint.

but that's not what you said (I'm not trying to be antagonistic) - you called using a specific part of the browser an anti-pattern and poorly designed.

> The browser from 10 years ago works perfectly fine: it rendered and displayed text and images and didn't require hundreds of megabytes or even gigabytes of disk space.

and alternately, the ed editor also works perfectly fine: it edited files, and is 57k (likely more than it needs to be). but for some reason most people don't use it very often, instead opting for vim, emacs, atom, or even eclipse.

needing new things isn't an anti-pattern, it's simply the growth of the environment.


> but that's not what you said (I'm not trying to be antagonistic) - you called using a specific part of the browser an anti-pattern and poorly designed.

You're right. Where do you think the statements contradict each other? I don't think they do.

> and alternately, the ed editor also works perfectly fine: it edited files, and is 57k (likely more than it needs to be). but for some reason most people don't use it very often, instead opting for vim, emacs, atom, or even eclipse.

As a matter of fact, I use vim a lot.

> needing new things isn't an anti-pattern, it's simply the growth of the environment.

From what problem did this need arise?


> As a matter of fact, I use vim a lot.

which is miles ahead of ed (and vi for that matter), adding a ton of extra features when ed works perfectly well for editing files. so why use vim?

> From what problem did this need arise?

from the same problem that threads in general arose - some part of the program blocking another part of the program, making for a bad experience. javascript runs in a single thread for the whole page - so anything that is blocking in any way, due to processing data, dealing with loading data, rendering any sort of 3d for you, mixing audio, really anything that would give you any sort of interactive experience could block the whole page from being responsive.
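
the basic pattern is tiny (file and function names here are made up):

    // main.js - hand the heavy work to a background thread
    const worker = new Worker('heavy.js');
    worker.postMessage(bigArray);
    worker.onmessage = (e) => render(e.data);  // gui thread stays responsive

    // heavy.js
    onmessage = (e) => {
        const result = crunch(e.data);  // long-running loop, off the main thread
        postMessage(result);
    };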

you might as well ask from what problem threads arose - same answer. it's true that we had fairly primitive cooperative multi-threading in c using longjmp and setjmp, but it's still a lot better to fire up another thread when doing tasks like the above, especially when dealing with a gui is also involved, such as with the web.

I'm happy to provide concrete examples, if you want.


> so why use vim?

vim is lightweight and miles ahead of ed. It's easier for me, as a user, to use vim than to use ed. It's simpler and loads faster than Notepad++ or Code.

> from the same problem that threads in general arose - some part of the program blocking another part of the program, making for a bad experience.

While I concur that a web site not loading makes for a bad experience, I do diverge on the solution.

> javascript runs in a single thread for the whole page - so anything that is blocking in any way, due to processing data, dealing with loading data, rendering any sort of 3d for you, mixing audio, really anything that would give you any sort of interactive experience could block the whole page from being responsive.

Solution: don't use javascript.

> you might as well ask from what problem threads arose - same answer. it's true that we had fairly primitive cooperative multi-threading in c using longjmp and setjmp, but it's still a lot better to fire up another thread when doing tasks like the above, especially when dealing with a gui is also involved, such as with the web

Honestly I don't see where the problem is a user problem. It's a problem, for sure, but I believe it rests solely in the hands of the developer and web site's servers.

> I'm happy to provide concrete examples, if you want.

I would like that, if you're willing to also learn how I think some of the examples might be done differently.


(we're too many levels deep on hn, so replying to myself - this may end up going beyond what would be a good discussion on hn though)

> vim is lightweight and miles ahead of ed.

and modern browsers are miles ahead of where they were 10 years ago, giving us better text rendering, better image rendering, much better layout, and better interactivity.

> Solution: don't use javascript.

for some things, this is acceptable, but for some applications this simply does not make sense. loading a static page probably shouldn't need any javascript.

viewing a map, where you can zoom in and out, as well as pan around should probably use javascript. I'd be interested in hearing how you would solve that use case without using javascript.

> Honestly I don't see where the problem is a user problem. It's a problem, for sure, but I believe it rests solely in the hands of the developer and web site's servers.

> > I'm happy to provide concrete examples, if you want.

> I would like that, if you're willing to also learn how I think some of the examples might be done differently.

here's an example of something that I did real-world with web workers:

a city's building footprints, loaded from a feature service (~150mb as JSON), processed by a user-specific variable (such as age of building, square feet, building usage: store, warehouse, residential) where each building is rendered on an interactive map in different colors based on the visualization type. this can be done server-side, with 150mb of data rendered onto either vector or raster images, each processing tiles for any location, or it can be rendered in browser. without using web workers, expect a 5 minute freeze while everything loads, gets processed, and rendered into the dom, with the experience repeating every time criteria is changed or the map is moved. with web workers, it is near real-time, with no freeze. if this were rendered on the server, it would be quite a lot of server cpu usage to render on the fly, or a huge amount of image space to render ahead of time, plus very large amounts of data. add another layer on top of it, and you're looking at doubling that, along with huge data transfer, which would not make for a very good user experience.
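
as a side note, the big payload doesn't even need to be copied into the worker - array buffers can be handed over via the transfer list (url below is made up):

    fetch('/footprints.json')
        .then((r) => r.arrayBuffer())
        .then((buf) => worker.postMessage(buf, [buf]));  // second arg: transferred, not copied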

I'd be interested in hearing an alternate way to provide this, while keeping the data usage, server disk usage, and cpu usage (server and client) to a minimum, while still providing a compelling user experience to the user.


> (we're too many levels deep on hn, so replying to myself - this may end up going beyond what would be a good discussion on hn though)

Where would you like to move the conversation?

> viewing a map, where you can zoom in and out, as well as pan around should probably use javascript. I'd be interested in hearing how you would solve that use case without using javascript.

Using a browser-only solution? First, I'd ensure that navigation buttons (forms) are available which could load a new page if the browser doesn't support features. Then, I'd augment HTML.

First, declare an image tag. <img>. Then add an attribute indicating that it's a sub-view of a larger logical image which can be stitched together (panning), and another attribute indicating it can be zoomed in/out; the attributes should include urls to retrieve new/additional images. If the user pans or zooms too far, the browser can either deny the change that the user tried to do (eg, the image stops panning/zooming) and/or present information to the user indicating why it can't fulfill the request.

This is, in my opinion, a superior solution since it explains to the browser exactly what it is that you want to allow the user to do. Then we don't need to require the user to run (unaudited, unsigned, untrusted) javascript code.

> a city's building footprints, loaded from a feature service (~150mb as JSON), processed by a user-specific variable (such as age of building, square feet, building usage: store, warehouse, residential) where each building is rendered on an interactive map in different colors based on the visualization type.

That sounds cool other than (I assume) 150mb being downloaded onto the user's device before anything can be rendered. I'd build a native application or a server-side application. I also hope that 150mb of data isn't sensitive to whether or not a user inspects it (no secrets or privileged information, for example).

> it can be rendered in browser. without using web workers, expect a 5 minute freeze while everything loads, gets processed, and rendered into the dom, with the experience repeating every time criteria is changed or the map is moved.

First: I think it's audacious to assume that every machine has a lot of CPU resources such that adding web workers magically makes things faster. If a user's device has a single core, then adding web workers isn't going to make anything magically faster unless the only reason it's slow in the first place is from synchronous communication and execution.

Second: exactly how big is this DOM? When I think of a DOM, I think of a page or two or maybe three at the most. If so much data is being packed into the DOM that it takes minutes for the browser to render it then the DOM is being used for too much. Render an image of it on the server, calculate hotspots that a user could click on, and send the image and hot-spot regions to the client using an HTML map tag. Server then updates the image and sends a new one to the client.

Third: become a VNC server, start a session, and render it in the client using the same technique described above. Speaking from experience, if VNC is less resource intensive and faster then something is going wrong in the development pipeline.

> if this were rendered on the server, it would be quite a lot of server cpu usage to render on the fly

That's a problem... why? You'd rather push the hardware cost onto your user, I take it? I'd rather keep users' and consumer's buy-in costs as low as possible.

> or a huge amount of image space to render ahead of time, plus very large amounts of data. add another layer on top of it, and you're looking at doubling that, along with huge data transfer, which would not make for a very good user experience.

Have you measured it?

Also isn't 150mb considered a very large amount of data? Not everyone has awesome internet connections.

> I'd be interested in hearing an alternate way to provide this, while keeping the data usage, server disk usage, and cpu usage (server and client) to a minimum, while still providing a compelling user experience to the user.

Let me try to understand your project: you want to have an interactive map (ala Google Maps) with an overlay whose color is dynamic based on user selection?

It sounds like it could be done by using a regular HTML form to load images newly-rendered by the server. An added bonus would be allowing the user to navigate forward and backward among their selections using the browser's native navigation features.

Exactly why is it so expensive to render on the server? Exactly how much larger are these map images with data overlaid compared to the original map image? I would think that, generally, when you replace high-color-range data (eg, satellite images) with flat colors (overlays), you'd typically reduce image size (flat colors being compressible); regular map images (plain background, roads and features in solid colors, etc) are pretty small. JPGs and PNGs aren't very expensive (image size) at all compared to that 150mb. I would be extremely surprised if the user was pulling more than 150mb of images for a typical session navigating around a map and making analysis colorization selections.

If your server can't perform with basic image manipulation (take image, overlay a color, return result) then your server software and/or hardware needs some better developer/hardware resources assigned.

Regardless, it sounds like the issue you're trying to get at is the dynamic overlay. Again, I'm not a web developer, but doesn't HTML and CSS already support overlaying images on top of each other? How expensive do you think it would be to serve the underlying map images and, separately, serve pre-rendered overlay images?

I'd imagine the overlay images would be a single color so they should be really small. Not only that, but you can pre-render each building's footprint in black-and-white. Then when serving the pre-rendered image, change the color index/table/palette so that the foreground (black? white? doesn't matter) is whatever color you want to serve. I've not done image manipulation for about 15 years but GIF sounds perfect for this. Then the overlay just needs to go to the exact same coordinate as the underlying map image.

And, hell, just because I like to hate on javascript: let's go back to augmenting HTML. In the form tag, add an attribute to indicate that submitting the form should merely refresh a particular image with a specific name attribute. Then, when submitting the form to change the stats being shown, refresh overlay images only.


going to cut this one fairly short, as I think we're hitting the end of the discussion:

> Using a browser-only solution? First, I'd ensure that navigation buttons (forms) are available which could load a new page if the browser doesn't support features. Then, I'd augment HTML.

which doesn't give the user a very compelling solution, it gives them a static map; which definitely has its uses, but feature-wise isn't something that people would really like to use: we're a long way from Mapquest, and it's not a paradigm that users are interested in at this point, which is why browsers have expanded functionality to include javascript, and css.

> Have you measured it?

yes: each tile is 25k at low resolution, 100k at retina, times 15 (for zoom levels), 20 tiles for your average web view of a map. that's 2mb per zoom level. adding a single layer of information to that would add 2mb per zoom level. take the 3 possible options that I mentioned and you're at 6mb. add the ability to pan for a small city, and you're looking at ~2000 tiles, or 200mb (or 600mb to account for each option). add another layer, such as population density, and you're doubling it per layer added. add the ability to change the opacity between layers, and you're adding up to 255 times more data. that's significant, as we've now reached 153gb without adding an additional layer to give me something to look at while keeping that functionality. now store that, for each person just in case they come back or render it real-time.

>Also isn't 150mb considered a very large amount of data? Not everyone has awesome internet connections.

I'm on a 1.5mbit connection with 700ms of latency on a sunny day, 150mb of data for that type of application is much preferred to rendered images being delivered to me.

> If your server can't perform with basic image manipulation (take image, overlay a color, return result) then your server software and/or hardware needs some better developer/hardware resources assigned.

at scale. we altered map tiles real-time no problem, but it still made sense to cache them instead of eating up the cpu time for each request (in that use-case, all users got the same modification, but if we were to render all changes to the map on the fly, instead of minor changes in the browser, it would not have made sense).

> Third: become a VNC server, start a session, and render it in the client using the same technique described above. Speaking from experience, if VNC is less resource intensive and faster then something is going wrong in the development pipeline.

and then it becomes inaccessible to the majority of users. same with requiring the user to download an application to view what should really be a web page (could you imagine having an app for every web page you want to visit? that would just be silly).

I'm sure we can dive down further, but I'm not sure it's worth it - there are several problems that having javascript and a browser solve, you're welcome to attempt them without leveraging either javascript or the browser, but I'll happily continue to use web pages that are delightful to use, and leverage as many technologies as possible. I remember bbs's, and text-based interactive telnet sessions, as well as curses, and I think I prefer the modern web.

(quickly editing to add a comment about using css and html and just "render it that way" - which is what is occurring, but the amount of data available [yes, it's all public data] to give a compelling data-driven experience to the user still needs to be dealt with - that's where web workers come in, putting those images layered on top of the tiles, across 500,000 buildings takes time to process whether you're using javascript or some as-yet-to-be-invented way to composite those images in the browser: it takes time and it is better to render them in the background in another thread than it is to lock the browser for 20-45 seconds while it renders - this is progressive enhancement)


Website != Web application. The "anti-patterns" are super useful to those of us who are using all of them to create functional cross-platform applications ;) I feel that there's a disconnect between website devs and webapp devs, and that the latter win when it comes to adding new functionality. The people who want to use the web as a generic platform are the ones that are turning it into an operating system by itself, which by nature is going to cause bloat, but people still think of it as "hey, it's 'just' a web browser!" when in reality they're becoming almost another whole layer of abstraction that allows arbitrary programs to run on any OS.

I agree that the adoption of new unnecessary technologies by websites that don't need them makes for a frustrating experience. Like, when I go to a small-town newspaper's website to read an article, I have to wait seconds for the custom font to load, elements pop in and out of existence as third party advertiser code loads, a video somewhere starts autoplaying, I have to click through a GDPR consent form, a cookies-ok form, a "subscribe with your email" screenjacker, social media buttons appearing reflow the text, ......... yeah it's BAD.

I think the issues we encounter come from the wheel-reinvention and the fact that web counterparts of existing technology tend to eschew things like security and scheduling in order to quickly catch up.


Maybe this is exactly what a user doesn’t want. Maybe we should have a confirmation banner (including an estimate of the CPU load), like for getUserMedia, autoplay, etc.?


> Any website which uses one is, in my opinion, not only extremely poorly designed but uses anti-patterns.

I'd be curious to know how you come to that conclusion, since web workers are primarily used to shift heavy computation off the main thread and deliver a better user experience. And service workers exist to make sites available offline. I'm unsure how either is an "anti-pattern".


Exactly what heavy computation does a web site need to perform?

If you have heavy computations to perform and display then perhaps it's time to consider writing a native application.

If you need to work offline then perhaps it's time to consider writing a native application.


This is an extremely short-sighted view of the web. There are many advantages to web apps vs native, the biggest of which is that I can access the same app with the same configuration from (hypothetically) any browser on any device. It is also (generally) easier to develop a web app than a native app, all else being equal.

Personally, there are plenty of services I would never use if they required me to install a native app.


In theory, the internet should be a place where our conflicting opinions about how to use our hardware can coexist. Unfortunately, the internet is slowly evolving to where that's not the case.

If we want to turn the web into native apps, that's fine. Give me the same amount of control over the web that I have over native apps.

With a native app, I can control exactly how much CPU/disk/network it uses. I can have the app re/start automatically and I can work with it natively in literally whatever language I choose. Not only that but the web browser typically does a piss-poor job of isolation between sites.

With a web app, everything is more difficult (if not nigh-on impossible). And what do I, as the end user, actually gain in the end? Nothing that I didn't have ten years ago with far less complexity.

Ultimately at the end of the day it's my device. Some random third party person should not be permitted to use my device to execute or store whatever they want just because I randomly, accidentally, or was coerced into clicking on a malicious URL.


Why, though? Let’s say I’m making a game. Browsers have web workers and WebGL. Why shouldn’t I target that universal platform that will allow people to play on Macs, PCs, Linux and all major mobile platforms?

It just seems very odd to draw lines in the sand that don't need to be there. How is any of it bad design or an "anti-pattern"?


It isn't a column. It's where the mixed content and other flags are (The vertical strip of 3 dots).

You can use the "*" scope and enable "Forbid web workers".

Be warned that this will break Google's captcha.


Perfect, thanks!


Isn't it sad how that's one of the first things popping to mind when we talk about the web platform? I wonder if Berners-Lee ever anticipated how quick economic interests would ruin everything.


Webminers normally don't use GPUs. They target algorithms like CryptoNight which are suitable for CPUs.


*yet


Firefox manages to be both significantly slower for CPU benchmark and significantly faster for GPU, compared to Chrome.

The former gets me wondering what is going on with the javascript. Sure, v8 is usually faster than Firefox, but not 8x faster.

Chrome GPU does it in 0.053s, where Firefox is hitting 0.005s, a tenth of the time.

This is on a 2015 Mac Pro.


Chrome's GPU sandboxing is more aggressive, so it's probably performing extra copies. GPU round-trips (think synchronous operations like 'copy the result from the GPU back to CPU memory') also end up being slower due to the delays introduced by bouncing things between the page and the GPU sandbox.


This is anecdotal, but when I built a JS image segmentation tool I found that FF was significantly faster than Chrome when working with the lower level stuff like ArrayBuffer, etc. I assume they are using a lot of this here since they are probably dealing with the WebGL API which needs to receive data as a buffer.


Maybe because FF started optimizing for asm.js earlier? Just an ignorant guess.


Chrome was always significantly slower with typed arrays. I remember early benchmarks, where array buffers were actually slower than plain arrays (which also aligned with my own tests).


Isn’t this due to security and process isolation differences? As in chrome does more to be secure than Firefox as far as how system resources are accessed via sandboxing?


I used this for a simple implementation of Conway's Game of Life: https://turdnagel.com/cells/


I've used this for a project and it works pretty well. What's pretty incredible is that it was actually made at a hackathon in Singapore. My main gripe is that debugging errors is very difficult if they happen in the graphics portion. Sometimes you have to switch to CPU mode, which works differently from GPU mode, which means the results can be different for unknown reasons. Another annoyance (and this is more to do with web standards than anything) is that because canvas data is represented as RGBA in a single array, any conversion to matrix form is very slow, because any dimension past a certain size fails in GPU.js.


It would be interesting to compare some calculations between Gpu.js and https://js.tensorflow.org/


Caution: this just hung Firefox for ~30s and then my MBP rebooted.


Same here.


Why do we need such hacks to access basic compute infrastructure?


The better question is why would you care? GPGPU has nearly no use outside of HPC applications, and nobody in that world is going to bother trying to make their supercomputer run Chrome.

Otherwise for anything "GPGPU" in usage it tends to really just be "GPU" - like applying filters to images. That's just running a shader on a texture, it's something you can do quite naturally in WebGL already.

GPGPU had a lot of buzz a few years back, but it's basically just a niche usage these days.


GPGPU definitely does not have "no use outside of HPC". Almost any modern game engine you use nowadays uses GPGPU in some way (for AI, physics, some terrain processing, ...). GPGPU is also insanely popular in modern AI these days. Almost every AI framework has a backend for CUDA or OpenCL. A lot of offline renderers also use GPGPU nowadays.

And those are just a few examples that come to mind. There are plenty of applications that benefit from high-latency, high-throughput computation. Why do you think NVidia is still pushing and updating CUDA?

> Otherwise for anything "GPGPU" in usage it tends to really just be "GPU"

It's like saying most Turing-complete usage tends to be just "CPU". You can absolutely do GPGPU with just "normal" GPU usage. After all, what is an array of data if not a texture? A lot of processing is implemented this way: a texture and a pixel shader. But that doesn't mean it is not GPGPU. GPGPU means that the GPU doesn't limit us to displaying polygons on the screen.


> Almost any modern game engine that you use nowadays use GPGPU in some ways (for AI, Physics)

No, they don't. Those are failed applications. The round-trip time kills it, along with the GPU being generally over-subscribed in the first place while the CPU sits idle.

For some stuff like terrain sure, which is why compute shaders exist in directx & vulkan. But it's a rather different usage in practice, and a very different "how you program with it".

> Why do you think NVidia is still pushing and updating CUDA ?

Because they still sell Tesla cards to the HPC market? Have you not noticed that the GTX cards massively cut CUDA capability because it didn't help games?

> GPGPU means that the GPU doesn't limit us to displaying polygons on the screen.

OK but that's clearly not what I was talking about? GPGPU in this context was "run non-graphics math written in JS on the GPU". Similar to something like CUDA. Not "here's an actual pixel shader, run it in WebGL". The how matters here.

> After-all, what is an array of data if not a texture ?

A texture is an array of color channels, and often swizzled by the driver. They are not equivalent to a generic array of data, although if you squint you can get close treating them as similar.

But textures are treated differently, which is why actual GPGPU systems exist, because the hack of "treat a texture as an array of data" doesn't really work well.


Security, not only against bad actors but incompetent programmers having access to the inner workings of other people's machines. The idea is that nothing should normally need to access anything it doesn't actually require; so, in circumstances like this, developers need to go through a few extra hoops to help them engage in safest practices and not blow up other people's machines.

Plus, different hardware, different abstractions, and the web has been relatively slow (and rightfully so) to open up access to this level of the computer until fairly recently.

Lastly, JavaScript isn't batteries-included, so someone had to come up with this, and apparently it took until today for us to see the fruit of that labour.


Compiling is a hack now?


GLSL syntax isn't that much different from Javascript. If you write it directly you get the fun of using types. And WebGL2's support for GLSL ES 3.0 brings features that make it much handier for compute operations, like bitwise operators. It's really not that complicated to write a compute function without any kind of library. Just throw up 2 triangles that cover the output and scale your canvas to match the number of pixels you want for each 32 bits of output.

See my proof of work generator for the Nano currency that implements the Blake2B hash using these techniques: https://github.com/numtel/nano-webgl-pow

Send data in as an array of integers on a uniform then receive data out as a bitmap encoded how you please.
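
The skeleton looks something like this (a from-memory sketch with a dummy shader standing in for the real work, and error checking omitted):

    const gl = document.createElement('canvas').getContext('webgl2');
    gl.canvas.width = 256;   // one RGBA pixel = 32 bits of output
    gl.canvas.height = 1;
    gl.viewport(0, 0, 256, 1);

    const vs = `#version 300 es
    in vec2 pos;
    void main() { gl_Position = vec4(pos, 0.0, 1.0); }`;

    const fs = `#version 300 es
    precision highp float;
    uniform float seed;   // inputs go in as uniforms
    out vec4 result;
    void main() {
      float x = gl_FragCoord.x + seed;              // stand-in for the real work
      result = vec4(fract(sin(x) * 43758.5453));    // 4 bytes of "output" per pixel
    }`;

    function compile(type, src) {
      const s = gl.createShader(type);
      gl.shaderSource(s, src);
      gl.compileShader(s);
      return s;
    }
    const prog = gl.createProgram();
    gl.attachShader(prog, compile(gl.VERTEX_SHADER, vs));
    gl.attachShader(prog, compile(gl.FRAGMENT_SHADER, fs));
    gl.linkProgram(prog);
    gl.useProgram(prog);

    // two triangles covering the whole output
    gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer());
    gl.bufferData(gl.ARRAY_BUFFER, new Float32Array([
      -1, -1,  1, -1,  -1, 1,   -1, 1,  1, -1,  1, 1,
    ]), gl.STATIC_DRAW);
    const loc = gl.getAttribLocation(prog, 'pos');
    gl.enableVertexAttribArray(loc);
    gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);

    gl.uniform1f(gl.getUniformLocation(prog, 'seed'), 1234);
    gl.drawArrays(gl.TRIANGLES, 0, 6);

    const out = new Uint8Array(256 * 4);                           // read the results
    gl.readPixels(0, 0, 256, 1, gl.RGBA, gl.UNSIGNED_BYTE, out);   // back as a bitmap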


Firefox on Arch Linux with a Vega 64 card + i7 4770k @ 3.5 ghz:

CPU: 0.955s ±0.5% GPU: 0.003s ±4.3% (295.22 times faster!)

Seems like decent speedup.

It's a bit scary to have a web page have access to my GPU memory though, how does that work really? Does firefox clear the GPU memory before using it for tasks like this?


Similar (FF + Ubuntu 1804, GTX 1080Ti, i7 4790K):

CPU: 0.863s ±1.4%

GPU: 0.003s ±4.0% (279.34 times faster!)

I believe the library is using WebGL with buffers; browsers can make use of hardware for rendering and this library simply takes advantage of that


Just like JS clears CPU memory, webgl clears GPU memory before access.


250x faster than CPU on the Samsung Note 9, but what's really interesting is that it's 380x faster when I turn off all the power saving modes.


This is pretty cool. I don't know its practical applications. Honestly I don't even care. It's just very nice to see such tinkering. Great job.


Debian 9 x86_64/NVIDIA Corporation GK107 [NVS 510] hung like never before too. Happened after clicking Enabled and running the test.


When will they have FPGA accelerated JavaScript?


This is a joke, right?


WARNING to anyone trying: this crashed my MBP running 10.14.3 with FF 65.0.1. Lost some of my work.


Welp, this blew up my 2017 MBP (10.14.3). Hung after about 5 seconds of running.


GPU 491 times faster. Something's iffy, but I love the idea.


I went to Plaid!


Works fine in Chrome. Crashes Firefox.


Crashed my tab real nice.


Tensorflow.js is pretty good.


Will this solve the problem of my linter running Babel and its hundred plugins completely destroying my CPU just so I can get access to spreads?



