
I have mixed feelings on this. On one hand, websites that should be just documents are turning into resource-intensive applications, which is frustrating. Generally I do not want most websites using these features.

On the other hand, the web as a runtime is one of the most universal platforms we have. If I run Slack in a browser, I sure do want it to be able to notify me! (Though I wish doing so felt more like the Electron app - I’d rather use my browser than Electron if the UX were similar enough. I already have a browser, I don’t need another per application!)
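To be fair, at least on the web that capability is permission-gated. Roughly how a web app has to ask before it can notify you (my own illustrative sketch, not Slack's actual code):

    // Illustrative sketch only: a page must ask before it can notify.
    // Notification.requestPermission() resolves to "granted", "denied" or "default".
    async function notifyIfAllowed(title: string, body: string): Promise<void> {
      if (!("Notification" in window)) return;            // API not available
      const permission = await Notification.requestPermission();
      if (permission === "granted") {
        new Notification(title, { body });                // native-style notification
      }
    }

    notifyIfAllowed("Slack", "New message in #general");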

And as an occasional user of less popular mobile platforms (Ubuntu Touch, etc) a more featureful web runtime helps close the gap between those platforms and Android/iOS.

I do think it’d be better if these two use cases were more explicitly distinct than they are now, though.




It's almost like we shouldn't have tortured a document transfer system until it was able to run full-on applications.


If operating systems had ever provided security strong enough to run untrusted code, we wouldn't have needed to. It's really a failing of operating system security. Even today no operating system provides an application sandbox as strong and versatile as the browser's.

Platforms have fallen back on walled gardens (app stores) with centralized control of all code execution to compensate for the deficiencies of their security models. I don't want a future where the only apps I can run have to be signed and approved by Apple ahead of time. The web is the escape hatch.


The whole security angle is an after-the-fact rationalization and the vast majority of users (not developers, users) never cared much about it - as anyone in the IT department of pretty much every company can tell you (again, users, not developers, not sysops, not admins, but regular end users).

The real reason we see so many web apps is that the decision on what technology to use for an application is largely something developers decide (with "developers" i do not mean only individuals but also the companies and organizations that develop applications). The web gives developers pretty much complete control over what is going on in their application and over what the users can do and how; it allows them to force everyone to use the same version, allows realtime monitoring of how users use their applications, and provides tighter vendor lock-in than even the most obfuscated native application could have (since all the data is stored on the servers).

Security is just a very convenient excuse since a lot of people shut down their critical thinking whenever it comes up (my guess for why that happens: since a lot of people have been shamed by supposed security experts, and even more people have mimicked that shaming, we ended up with everyone just shutting up whenever security is brought up, to avoid looking like That Clueless One who would be shamed next - but that is just a guess).

But the real reason is the heavy control that web apps give to the developers and the vast majority of users do not really understand how biased against them that setup is.


Have you looked at the network traffic or behavior of most native apps? Continuous monitoring and cloud-based state are the norm everywhere, and for the same reasons.


Not all apps do that (in fact i cannot think of any desktop application that i have installed that does something like this - though if i found one, it would get the boot) and because applications run locally it is possible to monitor and control their behavior (indeed, with a server you just don't know what is going on, but with a local app you can at least tell that something is going on).

Also, while the UI is far from ideal (at least on Windows), you can block individual applications from accessing the Internet. It should be much simpler than it is now, though.


There is another interesting property of web applications: they can't be stolen. I guess this could also be a reason for their ubiquity.


What do you mean by stolen? Someone making something similar, or making unlicensed copies? For the former i do not see how web apps prevent that, and for the latter it only matters when the developer asks for money for each license, which is extremely rare with web apps anyway (the closest is subscriptions and rentals, but that happens with desktop/native applications too, e.g. see Adobe and Autodesk). But even that is really a facet of the heavily developer-biased control i mentioned that web apps provide.


But people still download and run apps every day. Some even prefer it. I doubt this migration was due to security concerns -- since when are app developers particularly concerned about the quality of the security controls imposed on them? I suspect the shift was more due to ease of access, both for the user and for the developer, helped along by easier compatibility.


App developers care about the barrier to getting users to use their app. The barrier to getting a user to click on a weblink is much, much lower than the barrier to getting them to install an app.


This is one of the largest failings of the App Store providers. They should recognize this installation barrier and work towards fast and ephemeral app installs.


I think it's great that they don't do a good job there, because it gives a big advantage to a cross-platform open-source open-standards alternative.

Though Android does try to address this with Google Play Instant (https://developer.android.com/topic/google-play-instant). I've never encountered it in the wild though.


The two main App Store providers already have products that provide fast, ephemeral app installs: Safari and Chrome.


Users do care about security (or at least a lot of them do). That's why they don't install every random app that they could.


Right, but users can only use an app once it has been developed, and ultimately if the user needs an app, they will go with whatever format it's being distributed in. Then there's also the convenience factor — Gmail is a good example of this — where a web app is more convenient than a downloadable app that does the same thing, because it requires no installation or updates.

I would also suspect that most users haven't any clue about the security implications of using something in a browser versus using an app.


Well, my wife is an exception to that. She installs all kinds of crap apps on her phone.

And we got our phones through our daughter who works at Verizon, so when my wife moved from an Android to an iPhone they called me up and asked for my iCloud password which I, like a dumbass, gave them.

I just checked and I've got four more bullshit apps on my phone I need to delete :D


> If operating systems had ever provided security strong enough to run untrusted code

The browser hasn't either. See all the recent Intel bugs and how many can be exploited from JavaScript.


Sure, running untrusted code will never be 100% safe. But visiting a random website is 99% safe, whereas running random .exe's is 1% safe. Neither is perfect, but in practice, one is good enough for most situations and the other isn't.


Depends if you dockerise...

I agree, it's better, but the idea that you can just happily run whatever in the browser and it's all fine isn't quite true either.

But it is a lot better now than it used to be.

When it comes down to it - why am I exposed to untrusted code if what I'm trying to do, for the most part, is just browse and read info?

Perhaps we should separate browsers-as-app-platforms from browsers-as-readers.


> It's really a failing of operating system security

Please stop the gaslighting. Virtual machines and sandboxes existed decades before browsers.


What counts is whether the platform makes it easy to use and promotes it in a way that average users actually use it.

If I create a native Windows app and link people the .exe to try it out, approximately 0% of people who run it are going to run it in a securely sandboxed way. If I create a web app and link people it to try it out, approximately 100% of people who run it are going to run it in a securely sandboxed way.

Furthermore, some people will specifically avoid trying out the .exe I send them because they don't trust me fully with everything on their computer. As a developer that wants to show off things I make, I don't want this obstacle to exist. If I make a web app instead, I know it's more likely people will try it out.


Virtual machines per app are a thing, but they have a lot of disadvantages (particularly resource consumption). Various sandboxes have existed for a long time, but I’m curious which ones you would point out that have survived anywhere near the scrutiny and attack surface that browsers have.


Is letting Google, Facebook or Apple control every aspect of our lives by voluntarily sending them all our personal data, contacts and network information more secure?

I would rather run every application in its own VM under a different unprivileged user

P.s. the browser is the main attack vector on mobile, not only because it's so complex that bugs are everywhere, but mostly because web app security sucks


> the browser is the main attack vector on mobile

Citation needed. Browser-based attacks are difficult to come by, because it's just easier to attack an application instead.


>Citation needed. Browser-based attacks are difficult to come by, because it's just easier to attack an application instead.

You require a citation, then make an assertion with no supporting source.


Browsers usually have fairly strong security models, since they are expected to constantly run untrusted code (which has nothing to do with web app security, FWIW). Apps rarely get this kind of scrutiny and often don't (Android) or can't (iOS) employ features that browsers do, such as multiple processes.


OSes constantly execute untrusted code as well.

Many game engines do it as well, just think about DOOM mods

I think the point is that browsers are not as good an application platform as an OS (given the limitations) but are as complex as an OS and have more bugs

The fact that mobile apps are terrible is not an excuse for having a terrible document protocol used for applications

Apple invented mobile apps as we know them today, but native apps in general have served people well for ages

We are at a point where a native app with some API is more maintainable than a browser app

Not even talking about the ecosystem and its fiascos, like corrupted npm libs used by millions without anyone looking at a single line of source code, or the famous left-pad incident

It doesn't really matter where the weakness is, if it is exploitable

As Alan Kay once said "the web was made by amateurs at best"


Would it really be better if we didn't evolve the web, and we had more OS-specific unsandboxed applications and greater fragmentation between OSes (and a stronger incentive for everyone to stick to one OS, like Windows) instead?


Yes, OS-specific applications are good, and we should have more of them. By "sticking to one OS", and apps that follow the OS's conventions, users actually have a chance to develop expertise in those apps. But the web ensures that everyone stays an amateur.

While website capabilities have evolved, the web UI itself has regressed. No major UI paradigms have emerged from the web since the 90s (except tabs, arguably). URLs, bookmarks, cmd-F Find, and clicking on links still dominate, except they work worse now compared to the 90s, because of SPAs and lazy loading.


My point is that if the web didn't evolve, then Windows would have further cemented its position. If people made Windows apps instead of websites, then other operating systems and mobile wouldn't have taken off as much, because fewer developers would have built separate apps for multiple platforms than built cross-platform web apps.

Also, all of those features that rkagerer complained about would be even more abusable, because in general, Windows apps don't have to ask for permission for those things. I don't get how someone could complain that it's bad for a web app to be able to ask for permission to access their contacts, and would prefer to have a native app (that can get them silently by default).

Maybe you could replace "Windows" with "iOS" in this hypothetical, which would improve the permissions side of things, but I think it's likely that Windows was only supplantable in the first place because of the popularity of things being on the cross-platform web instead of in native Windows apps. And especially as someone without an iOS device, I'd be pretty sour if the effort on the web had gone purely into a locked-down, non-open platform I didn't have.

I think the way the web has approached being a universal open-source/open-standard app platform is extremely exciting. The fact that web app buttons look different than iOS/etc buttons is a small price compared to the benefits, and is the sort of thing that can probably be solved within the model once developers think it's important enough. (I think modern frameworks and/or the web components standard will provide a good base to make more native-like experiences common on the web.)
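For example, a minimal custom element built on the standard customElements API looks roughly like this (the element name and "label" attribute are made up for illustration):

    // Sketch of a reusable component via the Web Components standard;
    // "fancy-button" and its "label" attribute are hypothetical.
    class FancyButton extends HTMLElement {
      connectedCallback(): void {
        const shadow = this.attachShadow({ mode: "open" });
        const button = document.createElement("button");
        button.textContent = this.getAttribute("label") ?? "OK";
        shadow.appendChild(button);
      }
    }
    customElements.define("fancy-button", FancyButton);
    // Used in markup as: <fancy-button label="Send"></fancy-button>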


No OS has "taken off" because of the web. Windows is as cemented on desktop as ever, and ChromeOS (the only post-web desktop OS of note) has a fraction of a percent marketshare.

iOS initially had a web-app-only developer story (the "sweet solution"), but the quality gap between web and native apps was so undeniable that Apple reversed course and shipped a native SDK. It would be no different today.

The quality gap is not about buttons that look different. It's that nothing behaves consistently. Every site does its own custom thing, so users are forced into the lowest common denominator of interactivity (click or tap).

Examples: Gmail has its own fake windows, context menus, dropdowns, drag and drop, key equivalents, etc. and they all fall apart as soon as you try to do anything nontrivial with them. And it's been like this for 15 years, so I don't see any cause for optimism on this front.


If Wine got a tenth of the effort that goes toward Firefox, the whole Windows lock-in issue would have been solved ages ago.


That's a huge false dichotomy.


Yes. OS-specific native apps are generally a much better experience for the user.


As opposed to a strong incentive to stick to Chrome?


Chrome is open-source, cross-platform, built on open standards, securely sandboxes code run inside it, and it's interoperable with other web browsers. Considering operating systems and web browsers as app platforms, I think it's better and safer for people to use apps on platforms that are built on open standards, that are able to run freely on many devices (including free-and-open-source ones), and that securely sandbox apps so they don't have full access to people's data by default. (Android comes close to meeting these criteria, but my main complaints would be the difficulty of running Android apps on non-Android devices, the fact that apps are harder to run than visiting a URL, the lack of alternate implementations of the platform, Google's full control over the platform APIs, and Google's de-facto control over distributing apps.)


>Chrome is open-source, cross-platform, built on open standards, securely sandboxes code run inside it, and it's interoperable with other web browsers.

A lot of your argument here is based on the assumption Google Chrome is open source software. Google Chrome is not open source, it's proprietary.


The browser is such a great place to run an application - it works almost universally.

I remember when applications had to be tied to the OS: does it run on Unix, Linux, some other OS, or this hardware... what a pain.

If it is a web app it probably works most places.


ISWYM but I'd be much, much happier if the browser functionality of mainly delivering text and images was kept separate from the application functionality of ... doing bloody everything.

I want to turn off the application side of things for my safety (and I do), but too many sites require it unnecessarily to do the most basic tasks of displaying static text and pictures.


I kind of wish major browsers would show a banner asking to enable JS, like they did for Flash and Java. This would discourage developers from using JS unless they really need it, and make them think about graceful degradation.

But browsers won't, because they have nothing to gain from it; and it would be way too confusing for many users.


I think that is more about how whoever wants to offer a site wants to offer their content, not about the browser itself.


> I remember when applications had to be tied to the OS, does it run on unix, linux, some other OS, or hardware... what a pain.

And now many are tied to one or two browsers.


I think there are strong Gall's-Law, worse-is-better and business reasons why a proper GUI platform could never have crossed the chasm to universally cross-platform, on-demand delivery, with no single vendor owning it.


It's just the amorphousness of life seeping in. People want more things in more contexts, which will always expand the scope of the things they use.


The web was originally conceived as an interactive hypermedia platform, not a “document transfer system”.

It's true that it has displaced things that were true document or file transfer systems (Gopher, FTP) because it subsumes those functionalities, but it wasn't ever just that.


"I have mixed feelings on this. On one hand, websites that should be just documents are turning into resource-intensive applications, which is frustrating."

I agree, but I would be ok with them if they required explicit permission from the user before being allowed to use those features.
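Several of these features already work that way. A rough sketch (my own example using the standard Permissions API, with "geolocation" as just one capability) of how a page can check what the user has granted:

    // Sketch: check a capability's permission state before using it.
    // state is "granted", "denied" or "prompt" (the browser will ask).
    async function canUseGeolocation(): Promise<boolean> {
      const status = await navigator.permissions.query({ name: "geolocation" });
      return status.state === "granted";
    }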



